I am not a professional programmer, and CI/CD is outside the bounds of my experience. I will say that, having been trying to get my head around TDD recently, I was initially immensely frustrated at having to stop and scratch my head over how to test something when my brain was already automatically working out implementation details. However, having tried to stick with it, I'm repeatedly struck by how many bugs are caught immediately and corrected effortlessly when a unit test unexpectedly fails, and how quickly I wind up back in the debugging quagmire when I succumb to the temptation to forego the tests and 'just get this feature working quickly'. This has very quickly become one of my favourite channels.
I think TDD is one of those concepts that is very easy for a dev new to it to overthink and struggle to get their head around; I know I did, until I worked my first proper job, saw how they'd implemented unit tests, and was taken aback by how simple they were. The simplest test to start with is an expected input with the expected output; then expand that to a few invalid inputs with their expected failure handling. And if you think that set of tests will be covering too much code, that's your clue to split your function into sub-functions and write more granular tests for them.
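To make that concrete, here is roughly what that first pair of tests can look like. A minimal Python sketch; parse_age and its error handling are invented for illustration, not taken from any real codebase:

```python
import unittest

# The simplest starting point: one expected input with its expected
# output, then a few invalid inputs with their expected failure handling.
# parse_age is a hypothetical function made up for this example.
def parse_age(text):
    age = int(text)  # raises ValueError for non-numeric input
    if age < 0:
        raise ValueError("age cannot be negative")
    return age

class TestParseAge(unittest.TestCase):
    def test_valid_input_gives_expected_output(self):
        self.assertEqual(parse_age("42"), 42)

    def test_non_numeric_input_is_rejected(self):
        with self.assertRaises(ValueError):
            parse_age("forty-two")

    def test_negative_input_is_rejected(self):
        with self.assertRaises(ValueError):
            parse_age("-1")

if __name__ == "__main__":
    unittest.main()
```

Once those pass, every new invalid input you think of becomes one more small test, and a function that needs too many of them is exactly the one to split up.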
@@andrewkenworthy7439 This is my experience too. But unfortunately, it's often an argument for not doing unit tests, summarized as "they are too simple, why have them at all". Or as a colleague recently put it: "your unit tests (for some authorization service) are really only testing setters and getters and an occasional condition. I think they are a waste." I could not explain that the code was this simple *because of* TDD. That this entire piece of software turned out "just getters and setters and an occasional condition" because of hours of TDD, because of hundreds of lines of tests that never made the final commit. So yes, TDD is simple. But mostly because it forces you to keep things simple: a circular dependency, in a way. 😀
@@berkes I haven't found TDD (or BDD, for that matter) to be universally applicable. For some kinds of features, it's very easy to do it the TDD way (anything algorithm-related works well, IMO). What does seem to be universally applicable is the need for tests, so whether you write them before your code or after doesn't matter that much to me, just that you write them.
Very interesting that there is now data to show quality and delivery speed are synergistic, not antagonistic. I am often telling my team "we need to deliver this fast, so we can't afford to do anything shoddy, otherwise we WILL be late". I guess bitter experience has taught me that the only way to hit a hard deadline is to do things right. Sacrifice quality for haste, and your product will be both late AND crap.
My previous PM scolded us a couple of times when we said we needed time to create more unit tests for better coverage, and to refactor due to the removal of a requirement that was no longer necessary. She didn't understand that this was part of our Definition of Done, so every time tickets were not done at the end of a sprint she'd be pissed off and ask why, why, why. We said we can't consider it done without proper test coverage... even though we had shown demos of the application being fully functional, it's not done until all testing is done. Glad we don't have her as PM anymore. As a junior it always felt like it was my fault, because she made me feel as if I was dragging the team down. Luckily the seniors later told me she wasn't a good PM, and this video confirmed I was right about testing.
I will compromise if I face a hard deadline within the next two days or so, but I really try to avoid it. However, my ability to compromise once in a while depends on me really keeping my code clean and well structured at all other times; otherwise my code would turn into a horrible mess.
While I generally agree with most of your videos, there's always one question that comes up in my head: where on earth do you people find these teams where everyone wants to learn and be better? I keep running into teams that during interviews talk vividly about how they have CI/CD, automated tests and all those fancy things, they even show examples and whatnot, and then once I start working, all of a sudden none of that is important and the only thing that matters is getting the next minor feature out ASAP, and the quicker you can shitfix it the better. I've seriously been considering leaving development altogether after a couple of years of job-hopping from one backwards team to another in frustration over the almost complete lack of interest in improvements. What I've found is that most developers don't seem to care at all about the quality of their work. As long as it does mostly what it says on the tin, they're content.
I've experienced this same problem at nearly every job in software, with any (software) language. Know-nothing managers make the problem worse by delegating decisions to the team, which guarantees that the majority of mediocre programmers always shout down or outvote those trying to make things better.
@@cloojure Even worse when the know-nothing manager delegates all decisions to the self-taught wonder kid that "gets things done" and is online 24/7 to fix things that never would have been broken if a semi-competent developer had done it a little slower.
I just went back to watch "Excuses" again before moving on to something else. Mr. Farley describes himself at 00:01 as a "Proffesional (sic) Software Developer"... Give me a break... 🤣🤣🤣
I agree with your opinion of feature branching, but not in all cases. Basically two cases: - interns or new employees, whom I don't fully trust yet, especially when they touch important pieces of code, which will probably break everything for everyone (also addressed in the video) - experimental features, until I'm sure they will make it in
"It's not somebody else's responsibility to give us permission to do a good job". I think you need to do a video just on this idea. Robert Martin has discussed this quite a bit with the notion of professionalism and being responsible for the code we write no matter what management asks. As professionals, we need to get better at pushing back. UPDATE: as I think on this, I think this probably one of the most important ideas to pass on to our younger peers. Don't ask for permission to do a good job, ie, pair program, write unit tests, refactor code.
Good explanation. I still have some questions:

1. The code review question was really bugging me in the CD workflow. From experience, code reviews done by people NOT involved in developing the code are the most valuable. They will spot things the developer(s) became blinded to or ignored for whatever reason. I am not sure "do pair programming instead" is a convincing answer, but OK. I wonder how to convince a company to cut their dev force by half (intentional extreme) and let them do pair programming because it's faster and more efficient. Is it twice as good? More, perhaps? If so, it could convince some bosses, I guess.

2. The other issue I see is the commit history and how that ties to a work tracker like Jira. Do you hold on to commits until you have a larger body of work? Do you commit simple typo fixes? Do you include a JIRA ticket in every commit? "JIRA-1234 fix typo"? Should it be enforced? How do you refactor? Does the entire history of you learning about the problem and refactoring it 3 times get immortalized in Git history? People are generally quite bad at writing commit messages if there are no checks on their formatting. I have worked with such code bases, and the commit history was just useless. Finding out how a certain change came to be was next to impossible due to the sheer number of meaningless commits. It is hard to track down the work item, the people involved and the process, or even the exact change... and when there is a regulator breathing down your neck... not fun. Feature branches that tie back to the work tracker, trackable pull requests (code reviews), and squash commits with enforced formatting that also tie back to the tracker solve this quite nicely. But they are incompatible with the CD workflow, sadly.

3. And one other thing I find quite problematic is how you manage failures. For example, Google does not allow things that are not tested in the code base. If something slips in anyway and then breaks, they first roll back and then try to figure out what went wrong. In the CD workflow you would have the code broken most of the time, unless people cobbled together their own private CI and ran that before committing? Which is probably not always possible or desirable. It is the commit, and the pipeline triggered by it, that verifies it. Rejecting it post-commit or automatic rollback are probably not OK in CD. So what happens then? Others wait for the guy who broke it to fix it? And if "main" is broken most of the time, other people like QA might struggle. Is the odd green "build" always shipped? Having confidence that main is "good" is nice, as there is a place to fall back to or ship at the end of the day. I understand that gate-keeping is slow, but to me the above are valid concerns and I don't understand how the CD workflow accommodates them.
1. The data on pair programming says that 2 people complete the same task as one person in 60% of the time, so not 2-for-1, but not faster. But the quality produced by the pairs is substantially higher. The overall impact is that pairs are at least as efficient, and probably more efficient, than a single dev. The problem with being more definite than that is that teams that do pairing usually do a lot of other good stuff too, so you can't isolate the effect of pairing vs other improvements.

2. The commit history still tells the truth, but it is a truth more like a transaction log in an event stream, rather than some kind of time-based snapshot. Yes, include a reference to the reason (could be a Jira ticket) in every commit. You can take this further: adopt some conventions for commit messages, and you can programmatically recreate clear descriptions for releases. I do a lot of work in regulated industries; we can often auto-generate release notes.

3. Well, part of CI and CD is to work so that the codebase is always good: all tests pass (CI) and your software is always in a releasable state (CD). So no, you can't knowingly commit code that breaks things! If you break something you "stop the line" and can't release till you fix or revert the problem; that is what CI (or CD) means. Teams that work this way test nearly everything with automated tests. Sounds slow, but it is not, because you spend time writing tests instead of diagnosing and fixing bugs. Teams that work this way spend 44% more time creating new features than teams that don't. I have videos that cover all of this stuff on my channel.
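To illustrate point 2, here is one way commit conventions can be turned into release notes. A minimal sketch, assuming "type: description" subject lines in the style of Conventional Commits and a git tag to diff against; the prefixes, headings and tag name are illustrative, not something I'm prescribing:

```python
import re
import subprocess
from collections import defaultdict

# Auto-generating release notes from commit-message conventions.
# The "feat:/fix:/chore:" prefixes and the v1.0.0 tag are assumptions
# for this sketch, not a required standard.
HEADINGS = {"feat": "New features", "fix": "Bug fixes", "chore": "Maintenance"}
SUBJECT = re.compile(r"^(feat|fix|chore):\s*(.+)$")

def release_notes(since_tag):
    # List commit subject lines since the last release tag.
    subjects = subprocess.run(
        ["git", "log", f"{since_tag}..HEAD", "--pretty=%s"],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()
    sections = defaultdict(list)
    for subject in subjects:
        match = SUBJECT.match(subject)
        if match:  # ignore commits that don't follow the convention
            sections[HEADINGS[match.group(1)]].append(match.group(2))
    return "\n\n".join(
        heading + ":\n" + "\n".join(f"- {line}" for line in lines)
        for heading, lines in sections.items()
    )

if __name__ == "__main__":
    print(release_notes("v1.0.0"))  # assumes a repo with a v1.0.0 tag
```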
The way I understand it: devs push to a feature or temp branch. A pull request is then made to main (sometimes a dev branch deployed to a test server for QA). CI pipeline tests and other checks will have been configured to run on the pull request branch. Only when all checks have passed is the feature branch merged to main (or dev), the pull request closed and the feature branch deleted (no long-lived feature branches).
@@ContinuousDelivery Thanks! I understand it better now. About the last point, I still don't quite get it. You can run unit tests locally, but until you commit and trigger the CI pipeline that does the integration and runs the integration/system tests, you won't know whether your change will break the system or not. So you are not committing bad stuff knowingly, but it will inevitably happen. What I found most difficult when I was once moving towards this (I work in DevOps) was the immense pushback from everyone. We could not afford (or thought we could not afford) any breakage of the mainline. It had to be always good, so it could receive pre-validated changes from anyone and could be taken by QA or released at any time. Every once in a while someone still managed to break it, which sometimes resulted in significant slowdowns for others, who could not commit/deliver while waiting for that one guy to fix it... sometimes leading to a rollback if the pressure was too high and the fix was not in sight. It still sounds like the commit should be pre-validated and rejected if it is bad, or at least automatically rolled back after the fact, but then there is a risk someone might have pulled it. Perhaps it is a cultural thing, and people ought to be more tolerant of such failures? It is possible to prevent them, but that is basically "gate-keeping", which runs contrary to CD as I understand it from your videos.
@@awmy3109 That would be my instinct/experience (so probably wrong, as per the video), but it is against the CD workflow described in this and other videos. There should be no gatekeeping, no branching, no pull requests. Commit directly to "main", which triggers the pipeline. If it fails, it somehow has to be fixed quickly or rolled back (manually? by whom?). Which is odd, because what people usually hate most is when something unrelated to their work, like others' work, breaks them. I know CD is supposed to solve exactly that, but I must be missing something here. People will break others all the time, and the solution is that it is short-lived breakage? Or that it does not happen because of something else? Tests definitely help a lot, but oftentimes they cannot be run locally, or not all of them... I mean, I physically cannot run 400 Docker containers on my machine to test that my change to a core library did not break any of them. CI will do that, but that happens after I commit. Or, as I do it today and as you describe, I do it via a "feature/temp/whatever" branch and it will run on that as I open a PR. Or I commit directly to main and pray. :-)
Thank you for being clear and giving evidence. Only one point bothers me. Code reviews are not about mistrust. They're more about different professional points of view, discussion and feedback, asynchronous and documented. But to be fair, you can do every one of these aspects synchronously by pair programming, and that isn't about mistrust either. (Maybe I can't get the difference between mistrust, distrust and suspicion, as a non-native speaker. I mean not trusting the result of the work.)
Agree, code review is not about mistrust, or shouldn't be. As you say, pair programming is also not about mistrust. I have never seen code review that worked better than pair programming because it was more independent. That is simply not a problem that I have ever seen. I have seen people catch mistakes in code review, but in my experience pair programming catches more mistakes. My suspicion is that the "independence of code review" is an after-the-fact excuse rather than a real effect; it's a guess, because people don't like the idea of pair programming. Having tried both, many times, pair programming has always worked much better for me and my teams.
@@ContinuousDelivery And here is another fallacy: code reviews are not mainly about catching mistakes (although it doesn't hurt when it happens), but about sharing knowledge and ideas about how to write code better (aka refactorings). Also, doing code reviews doesn't imply that "people don't like the idea of pair programming". One doesn't exclude the other.
I agree with most of the things said here. But one thing that CI proponents never seem to acknowledge is that it is difficult to impossible to coordinate all the different features and tasks into a release schedule. Features can span multiple iterations. Some claim that all code can be released every day to production, but in the real world that is ridiculous. If you do CI you have to wait until everything is in a state to be released, and that is unacceptable in a lot of cases.
Needed to hear this to scoop up some motivation to continue pursuing automated testing in the builds I contribute to: "The impact of designing for testability on the quality of our code is profound... testable code is more modular, more cohesive, has better separation of concerns, it also hides information better and is loosely coupled. All properties of high quality code" - D. Farley
This man is a truly good influencer in software development. He really knows what he is talking about. Not like other influencers on YouTube that give bad advice to beginners.
I don't disagree at all, but to reduce the issue to "this is how developers should work" ignores the fact that waterfall comes from a corporate mindset. I have worked on some very successful projects in the past, and the features of those that stand out are:
- Empower developers to make the decisions
- Continuous user involvement
- (Obviously) Good automated testing that is constantly kept up to date
Oh, and no branching. I have also worked in places where the requirements need to go to one committee to get sign-off, then the architecture/design needs to go to a different committee, the implementation plan to another, and so on. Where users aren't given time to participate in the development process. A lot of this is done in the name of "compliance" and "we work in a regulated industry". Until people at the coal-face are empowered to do things quickly and do them well, those companies are doomed to waste millions on delivering bad solutions late.
Well, ironically, the guy who "invented" waterfall intended it as an exercise in what *not* to do, but like Scrum, management picked it up and ran with it, since they're always in need of silver-bullet solutions.
@@dauchande As St Fred said, there are no silver bullets. Actually, I would say that there is one: if you have good people (user and dev) who have the space and the freedom to create good stuff, then they will. If PMs want to dress it up as waterfall or DSDM or Agile or whatever, it makes no difference. If you don't have good people and/or you have a horrible bureaucracy, well then you're fucked from the get-go.
I think it is good to look for evidence not only in support of your claim but also in opposition. It helps develop one's gut feeling, but also makes it harder to miss some obvious benefit or problem one didn't think of.
Me too, that is how we arrived at Continuous Delivery: by trying lots of things that didn't work, and being hyper-critical of every activity or approach. I still try to be, but these days find it more difficult to find the holes in the approach. If you point any out, based on evidence or better explanations, I'll be very grateful.
PR reviews aren't about not trusting the dev; they should be about having a second set of eyes to catch something the dev might have missed (no matter how experienced and trustworthy they are), and about giving other devs the opportunity to see parts of the code they haven't worked on before. Pair programming just isn't realistic for most teams most of the time.
No, that is just a code-review. The PR was invented for open source projects. Git was written by Linus Torvalds, creator of Linux. Incidentally, Linus says merge changes frequently to avoid problems!
@@ContinuousDelivery Linus also says you should be able to create branches all the time without even thinking about it, which is one of the reasons he created git.
Where I work, pair programming is viewed as wasting developer time, which is sad. Even when I try, like today, to garner feedback, I know the person is not able to dedicate the time needed to really understand the change I am trying to make. Great content, by the way.
"Continuous Integration" is term a devised by Grady Booch in the early 90's. It did not mean daily merging. XP came years later, and promoted the idea of the daily (or more) merge. Both predate modern branching as implemented in Git (by many many years). "Merge Hell" practically disappeared with git and the practice of shorter dev cycles. Most developers have never actually experienced Merge Hell, let alone from feature branching, yet use it as a major argument for daily merges. Whether XP's accelerated approach is better or not, it does not change the definition of the term. So let's all stop saying branching longer than a day can not be Continuous Integration. One may claim it is sub-optimal, but it can still be Continuous Integration. Multiple devs working in an overlapping area of code requires adding noise and complication to the code base if they are to integrate their work when it is in an unfinished state. Devs hiding their unfinished code behind flags, or by "branch by abstraction", are still hiding their code from the other's execution path - which makes their "integration" just theater. Theater that comes at an expense.
Sure, but all of the modern definitions of CI that I am familiar with include "at least once per day". Sure, the Git tools improved merging, but there remain many teams that suffer. I consulted with an org that had, rather blindly, split their dev team (formerly a monolithic team working on monolithic code) into many small "feature teams" because "that is what Spotify do". The teams found that they kept breaking things, so they pulled the code that they were responsible for into a series of separate branches. I met them 18 months after this event, and their code had never compiled together since then. So it is probably better, but merge-hell is certainly still a common pain for many, many teams. I see a more modern equivalent of it constantly in teams that claim to be practicing "microservices": these teams have each service in a separate repo (just another form of branch) and then fight a never-ending battle to find a collective change-set-of-services that work together.

There is a difference with the information hiding in feature-flags, branch-by-abstraction, dark-launching etc., and that is that the "branch" is at the level of behaviour, not source code. That means very different things in terms of managing change across the code base. Hiding information in source code branches is a bigger barrier to change and so limits refactoring more.
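To make the distinction concrete, here is a minimal sketch of a "branch" at the level of behaviour, using branch by abstraction. PricingStrategy, the toggle and the numbers are all hypothetical; the point is only that both code paths live on trunk and get integrated continuously:

```python
from abc import ABC, abstractmethod

# Branch by abstraction: the "branch" is a seam in the code, not a line
# in version control. Both implementations live on trunk; a toggle picks
# which one runs, so the new one can be built incrementally while the
# old one keeps shipping. All names here are invented for illustration.
class PricingStrategy(ABC):
    @abstractmethod
    def price(self, base: float) -> float: ...

class LegacyPricing(PricingStrategy):
    def price(self, base: float) -> float:
        return round(base * 1.20, 2)  # current production behaviour

class NewPricing(PricingStrategy):
    def price(self, base: float) -> float:
        # Starts as a copy of the legacy behaviour and evolves commit by
        # commit behind the toggle, without breaking anyone else's build.
        return round(base * 1.20, 2)

def make_pricing(use_new_pricing: bool) -> PricingStrategy:
    return NewPricing() if use_new_pricing else LegacyPricing()

if __name__ == "__main__":
    print(make_pricing(use_new_pricing=False).price(100.0))  # 120.0
```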
@@ContinuousDelivery That is rather anecdotal evidence there, about this one company. There are thousands of other companies that use regular git branching daily without any issues. Of course we're talking about SHORT-lived branches, not long-lived branches, but that does not need to mean 1 day; it's a rather arbitrary time period. I think the nature of your experience naturally leads you to those companies and teams that do not function properly, because maybe they're not as experienced, or not as good, and such teams are much more likely to need your services than teams that actually know how to do their jobs. That is certainly going to skew your view of what is happening in the industry. I am not denying that many companies and teams have horrible practices, even thousands of them, but surely that does not represent the industry as a whole. For those in doubt: keep using your short-lived feature branches, don't force yourself to fulfil some arbitrary deadline of merging once per day if your work is not finished. In the vast majority of cases you won't suffer any bad consequences, and in those few cases where some refactorings lead to merge conflicts, just use unit tests to make sure the code still works after resolving conflicts, and have those tests run also after merging to develop, and you'll be fine. There's plenty of data that this approach works, because this is the approach used in thousands of companies that deliver software daily.
@@vyli1 Probably (also according to my own experience, as some developers seem to have a cooperative approach but others just don't) this way of working (CI) requires that developers also use techniques like TDD, pair programming etc., i.e. devs communicating a lot with each other. Having two developers working on the same task is a good way to improve cooperation in a team (not strict pair programming, but as a way to split work between developers and force daily sessions between the devs). If devs don't communicate with each other, we will have the branching hell, with the misunderstandings and code stepping on other devs' code, slowing the team down, which is the opposite of CI. So we should always strive to shorten the integration periods, if possible always keeping master in a releasable state.
I just don't even understand... does no feature take more than a day to develop? Wouldn't the most continuously integrated code be everyone simultaneously editing the code directly in production? I watched 15 minutes and I'm done...
I like code reviews because you get to learn what other people do and spread knowledge about the code. Pair programming is harder to do if you work in multiple locations. I’m still keen on trying it out though :)
I've heard this before. If you really think (and I do!) that learning what other people do and spreading knowledge about the code is a good thing, then instead of doing it once, after all of the decisions have been made and interesting alternatives discarded, why not do it continuously, while the code is being developed?
I've done pair programming via video call. Just find a tool that has no lag and good resolution and you're good to go. The only problem appears when your pair lives in a region with a big time difference (more/less than 3 hours can be a problem sometimes).
As a project progresses, the amount of information you have about what you need to do increases and is refined. So the project should get faster and faster, as the refinements to the models get smaller and smaller. This is what I experienced in the one project in my career that was done full design first, full TDD when moving to code (the only one where I was tasked to choose the process and drive a change). In most projects people refused or ignored TDD and CI, and sadly the pace instead got slower and slower. This was dismissed as a consequence of the code base's size and complexity increasing, as if it were inevitable. But when you have proper cohesion, that should not have any impact, as you always work on pieces of code of reasonable size. Unfortunately, most job offers will land you in the last category, as teams working with efficient methods have less turnover and do not need to increase their size constantly to compensate for the slowdown (at least that's my hypothesis; I lack data to defend it).
Completely agree, Dave! Sometimes opinions are defined by factual events, unfortunately. A recent experience: a new manager exploded one day at his inherited, highly efficient and cohesive team, because we always pushed to main. "This is bullshit," he raged, "we don't do trunk-based development, we need branches, PRs, and reviews." The team was highly productive, producing top-quality code. Well, the team fell apart and I got the hell outta there. Yep, the manager is still there, with support from his senior management (and this is a Fortune 50 company). Very solid advice! From the school of hard knocks: high quality comes from energized, cohesive teams moving fast with building tests as a primary concern.
While I promote pair programming for numerous reasons, for it to actually negate the need for code review it requires one of the pair to be "good" enough to have been a code reviewer in the first place. This is often not the case. Yes, two devs who are "almost that good" can level each other up and be good enough together, but this is also not terribly common in many teams.
Honestly curious: do you only allow senior (good) people to review the code of the "less good"? And if so, how do you determine who is good enough to review? Because this sounds highly inefficient to me. A junior reviewing my code (or pairing) will teach me (25 years of experience, still a bad programmer) a lot. Sometimes about things I take for granted. Or new insights. Often by making me explain something, requiring me to think it through better. And it will teach that junior a lot too.
11:16 'There is no tradeoff between speed and quality'. This is one of the most valuable lessons I've learned as a game developer. That's so true. Let's not fool ourselves: that's simply how proper software dev works, and we devs need to make that clear to management, all the project designers, etc.
I would so much love to be part of a team led by Dave. My day-to-day work model is far away from the one described in the video. It's a pity that my next career step (which is pretty close) is Tech Lead, and I have never experienced a development environment like this. Thank you very much for the videos.
@@ContinuousDelivery I actually did. It was another video full of very important ideas and concepts which I definitely need to take into account. Thanks !
Request: Talk about how an individual can practice CD in a team environment where CD is not a focus. If a developer works at a company that is very culturally tied to waterfall and code reviews, is it possible to practice CD on an individual basis? What strategies can engineers use to start to trickle these ideas up, in hopes of getting managerial buy-in?
Well, you can always write your own tests and run them, even if no one else does. To practice CD you need more than that, and I don't think that you can do that alone; you need some level of team buy-in. You can certainly do some stuff to improve the quality of at least your own work, though. For managerial buy-in, it is often best to make the case in terms of productivity and quality, instead of technical reasons. I have a video meant to help with that: ua-cam.com/video/Fjde-h_wHsk/v-deo.html
I understand the positives you put forward for CI/CD, but I run into problems in practice. If you have multiple people working on the trunk, and they are changing/modifying existing systems, then you wind up having to have a LOT of split logic so that you can feature-flag off the new code until it is complete. Add to this when you are dealing with external assets that go with the code (graphics, sounds, sprite atlases), and now you're needing to maintain duplicate copies of the assets and their loading logic as well. If a new feature takes weeks to implement with lots of testing, iteration and approvals, you have multiple features being simultaneously implemented (both new work and refactors), and the order and timing of releases is fluidly determined, CI/CD becomes problematic. I think you will cut down integration issues, but at the expense of having to spend a LOT more time making sure partial functionality of in-progress systems isn't accidentally released, thus causing bugs. Also, there will need to be a lot of cleanup work done throughout the code base to remove all of the feature flags and older code, again presenting an opportunity for bugs to be introduced. I'm very interested in people's thoughts and experience on this.
Yes, there are downsides/weaknesses to CI/CD, and that's okay. I don't believe in silver bullets, just a series of tradeoffs. For your specific example, your tests need to be aware of which feature flags are enabled and what the expected outcome should be in both cases, and you will spend extra time both communicating and cleaning the flags up when the feature launches. Engineers can and will make mistakes in deployment configs for which feature should be enabled, which can lead to breaks/leaks etc. I think the takeaway from the data is that when you look at the entire project, on average, CI/CD (strengths and weaknesses) outperforms most other forms of software development. Ask yourself: how many engineers in your dev org would you trust to properly integrate/merge large feature branches, versus how many would you trust to write a feature flag? How many better decisions could be made if everyone saw the code for every feature as it was being developed, rather than right before it gets merged to master and promptly shoved out the door?
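To make the testing point concrete, here is a minimal sketch of pinning down behaviour in both flag states. checkout_total and the bulk_discount flag are invented for illustration, not from any real system:

```python
import unittest

# Flag-aware tests: the same behaviour is specified with the flag on
# and with it off, so neither path can silently rot before launch.
def checkout_total(item_prices, flags):
    total = sum(item_prices)
    if flags.get("bulk_discount") and len(item_prices) >= 3:
        total *= 0.9  # new, flag-guarded behaviour
    return round(total, 2)

class TestCheckoutUnderFlags(unittest.TestCase):
    def test_discount_applied_when_flag_on(self):
        self.assertEqual(
            checkout_total([10.0, 10.0, 10.0], {"bulk_discount": True}), 27.0)

    def test_existing_behaviour_preserved_when_flag_off(self):
        self.assertEqual(checkout_total([10.0, 10.0, 10.0], {}), 30.0)

if __name__ == "__main__":
    unittest.main()
```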
The biggest benefit of everyone using the trunk is automatically discarded by git the moment you understand that your local trunk is already a branch of the origin's local trunk. You're ALWAYS working on a branch.
@@marktroyer3128 In my situation, I'm in games. So I don't just have Engineers in the repository, I also have artists committing art assets and scenes and I have designers committing balance changes. I have Engineers adding updates to 3rd party SDKs that can bring a lot of changes, all of which needs to be tested thoroughly before release. And I can't do piecemeal releases to customers, because they require changes to be both correct/stable as well as fun. If we feature-flag changes, that can cross dozens of source files as well as configuration files and art files (which don't have a simple boolean or compile-flag switch). I'm also working with very small teams, where pair-programming would instantly halve my engineering capabilities. I'm not discounting CI/CD, just trying to work through practical solutions to make it more workable. Thx for the feedback.
@@Gokuroro True, but I think the argument for CI/CD isn't that there are no branches, but that the more often you commit back to the main branch, the less you have to deal with in the merge. Still, CI/CD assumes that the amount of development that can be committed back is simple enough to fit in a small commit, rather than more extensive changes or additions that require a lot of things to be in place before you can evaluate whether the work is correct and the direction you want to move the code/project.
That's why you DON'T have Developers working on the trunk. Developers ONLY work on the Dev branch -- a SINGLE Dev branch for all Developers, mind you -- and submit their changes to it. An Alpha Tester tests the Dev branch when the Developers say it's ready for alpha-testing, then they flag the newest build that passes the tests for basic functionality. (alpha testing may be completely automated if you have a very mature development process.) Then an Integrator periodically merges the newest alpha-tested build into the trunk -- the Integrator is the ONLY person allowed to merge anything into the trunk. Then the Beta Testers get to work on testing the trunk, and flag the latest build they have tested for advanced functionality and stability. When delivery time arrives, a Deliverer pushes the latest build that has passed Beta Testing to the production environment. This can happen once a month, once a day, or once an hour if you hate your employees and want them to have nervous breakdowns from overwork.
About testing in a different way than just executing with the rest of the environment: it's not always the best approach, because sometimes the time needed to mock out, for example, an entire ECU's behavior and the tools used to monitor it is much higher than just testing manually by running it. I agree with your statements a lot, though :)
Can you talk about how experimentation and prototyping works within CI? I can't wrap my head around it. It seems that if I'm going to make a major change to the codebase, I shouldn't commit those changes unless I know it is the route that we want to take. Same issue for changing APIs, etc.
Do the experiments and prototypes somewhere else, but discard the code and keep the learning. Then re-write the code to production quality. Or make the change to production quality, and back it out later if you decide you don't like it. Both work fine!
@@image357 Not really, because you throw away the code on the branch once you have learned what you wanted to learn, and then develop a completely new version of any code you wrote on the branch back on trunk. The aim of the branch is to learn, not to create code.
I was thinking about this a bit. Isn't dependency versioning a form of feature branching? "This software uses version 2.4.5 of library X" is not continuous integration, but it is, from my experience, the de facto standard in software engineering, because it becomes pretty much impossible to keep up with the entire ecosystem of different libraries for even a medium-sized project and always update all your dependencies to the newest and shiniest versions. As usual, I think the answer is not dogma, but that sometimes you should use CI to guarantee stability, and sometimes you lock down features and dependencies to guarantee stability (not CI). It is sort of a question of project scale and available manpower; project management, in other words. Anyway, interesting video... food for thought!
The song says don't chase waterfalls. I think that is valid :). Waterfall is the JIT of software development. It presumes optimization that fits into a perfect budgeting plan.
I've been using CI/CD thoroughly in two projects I'm working on recently. It really improved my and my team's productivity; it's crazy how much time we've saved by (almost) not having to solve merge conflicts, and by skipping the annoying task of manually deploying new releases.
@SuperWhisk Ah yes, we don't feature branch. But we do have a development branch where we commit our changes. When an issue is completed, we open a pull request towards the main branch, and then the CI will kick in.
I love the channel and the content, and I am extremely grateful for the knowledge. Thank you! I also appreciate your awesome T-shirts :) ! The one in this video is wicked cool. I would be interested to know where you source them from. I spent an entire evening searching for the exact "Pinky and The Brain" t-shirt you had some videos ago :), couldn't find it (I got a different one though).
Regarding waterfall: my experience (over many years) is that in former times people (on the customer and vendor sides) were involved more exclusively and took more time for planning. With increasing pressure on time-to-market and reduced staff (or failure to increase the number of team members), planning quality got worse and worse. In my earlier projects all team members had more time to think about the optimal solution and possible side effects. Also, employees tended to stay longer, knowing better all the processes and products in use. But at the same time ecosystems have grown. So today, less time and less knowledge (in the sense of increased environment complexity) make it impossible to do good planning, which results in a much more "trial-and-error" way of working in general.
Regarding waterfall, I've been privileged to work on an ongoing, un-ending project that has to practise waterfall for various reasons. It works tremendously well and is truly the best way for that ongoing development and (believe it or not) continuous delivery of releases of the project. I will say the reason it works so well is that each of the stages between the layers allows for upward feedback on analysis: if, when a given work piece is fully technically analysed, the findings are that the work piece has substantially changed, the engineering team can feed that back up the chain with confidence that they will be listened to, and they can call a stop to a work piece for some reason. Now, I say this about this one project, and full disclosure: the team on that project, from the top all the way down, is relatively small and tightly knit, there is no real friction between the levels, and everyone has an equal voice. This is not the case in many other waterfalls I've been in, and I DO recognise it is unusual, but it's only fair to say :) ...I still miss working on that project :D
Excellent talk, thank you. I watched your pair programming video a month or so ago and talked my team leader into trying it out at work, as we have various members with different skill sets. I'd just like to say that we have now adopted this method after a trial period, as it really worked for us. I can't recommend it enough now. Again, thanks!
An abuela (grandma) kind of saying in Latin America goes "Despacio porque precisa", and it means, roughly and verbal forms aside, "Go slowly, because it is urgent". It's often said when someone wants to forsake quality and attention to detail for quickness' sake. Usually followed by an "I told you so, now do it again and do it right this time" when we, the then-youngsters, didn't listen and rushed through whatever the task at hand was.
When comparing two or more methods to achieve the same outcome, always be wary of those that always promote one method as better. In 30 years of IT I've never seen a single method that is always, and I mean always, without cons. An objective person with enough knowledge of a method should always be able to list the cons, even though they may rarely apply. An example is people who always dismiss waterfall, or alternatively incremental or iterative approaches.
Well, I disagree with the philosophy. All ideas are not equal: if you deny climate change or conservation of energy, you are wrong. I can certainly point to the "cons" of CD. It is extremely difficult to adopt, because it means that everyone's role changes to some degree, and that is very challenging. But that doesn't make it equally as bad as waterfall. I have taken part in many waterfall projects during the course of my career and seen many more. My observation is that when they work, and they sometimes do, they only work when people break the rules. In fact, this is what Royce was saying when he invented the term; the man that invented "waterfall" was advising against it because it is too naive. My experience of people who defend it is that they haven't experienced the alternative, because once you have, you would not consider going back to the old, low-quality, ineffective way of doing things.
Dave, I love your videos and the way you always present the many viewpoints on a matter. If I could point out one thing I didn't like much in this episode, it would be the use of too many animations. I found myself distracted by them very often. Thank you for your wise words.
Some of the subtle ones I did like; it's a fresh new look. But I also got distracted by the blinking question mark (which also influenced the text layout) and other bigger transitions.
I understand your strong opinions against feature branching, but you should mention that it works only when the implementation can be done within a day (and it's clear what the solution looks like). There are features that can take a week or longer to implement, and not because the feature wasn't broken down into smaller parts, but because it's hard to find out how the algorithm should work in the first place. It happens to me quite often that I have to use a trial-and-error approach to find the way out, and sometimes I even have to give up... What are your thoughts about that?
As far as agile vs waterfall and other processes go, I find these things to often be secondary factors. Primary factors:
- The computer language.
- The development tools.
- Lack of clear requirements.
- Lack of a strong leader.
- Non-native speakers.
- Not having a test environment.
- Having coding standards such that all code released has no "flavor".
- Using ideas such as UML, model-based development (Matlab/Simulink) or some other university idea that works like crap.
- Not having enough test hardware, or not investing in an emulator/simulator.
- Software people.
So let's look at this in an example using 2 teams making software to control an engine.

First team uses Matlab and Simulink to generate code, and systems engineers are hired over software people. They have machines that barely meet the minimum requirements specified for the software. They have a general idea of the thing they need to make. There is no coding standard because the code is autogenerated. The simulation is the test, but no thought has been given to integrating smaller models into a larger model. The team is all remote programmers: some speak German, some French and some Chinese. All speak mock-English.

Second team uses C and a specialized embedded compiler with limited debugging capabilities. They decide to write code in Visual Studio and create a test harness that simulates the hardware. The requirements are well written. The coding standard uses a formatting tool to ensure all code looks as if one person wrote it. The team are all native English speakers: some Americans, some Brits and some Aussies.

Team 1 will produce far less code no matter what process you use. I could create several of these examples, and the main factor of failure/success is never the process.
I didn't get your example, but I can completely agree with "the main factor of failure/success is never the process". Adding to that, the different theories of process in all their forms have always had that one single problem: when theory meets the reality of... well, the imperfection of people (as a broad term). The stricter and more inflexible a process is, the worse it will suffer from the meeting with reality (people); not to mention that as soon as the person who pushed the strict process has moved on (and those kinds of people move on/up fast), the process will likely stop being efficient because of "the imperfection of people".
I thought that your video was good, but the example about the pull request approach had me a bit confused, because Linus Torvalds said that it works on a "network of trust", i.e. he only pulls from people who he views as "not morons", and they do the same. But I guess you aren't talking about open source projects, and actually what you said makes a lot of sense, and I like the idea of pair programming because it seems like a sensible and mature approach. I liked everything you said then... I was just being a bit slow.

On a side note, one of the most important things I took from the video is that someone's own experience isn't enough to convince anyone else of anything, and so you need to use science. But I don't believe that anyone sensible realistically goes science first and experience second when forming their opinion. I think it always goes experience first, and then using science to convince others. The thing about opinions is that they are not all equal (as you pointed out), and some people become wiser from their experiences than others. People who tend to have good opinions, I think, are always doubting their opinions, trying other things to see what the result is, and are tempted to find the breaking point if applicable (when they don't have to). But this takes more time, i.e. it takes more time to form a good opinion than not to. When I write code I always get curious about whether this is really the best way to do it, and I sometimes spend a lot of time accomplishing nothing out of curiosity. People who are very fast at coding, and who don't tend to care about the truth of the matter to the same extent, do sometimes seem very good at getting things done quickly, and I think often, if there is a problem, they maybe just need a strong hand from someone like yourself, who has good opinions, to get the most out of them.

I think another way to think of "opinions" is judgement, i.e. how good is someone's judgement? Solid opinions are formed over a long time and judgements aren't necessarily, but they're still closely related. Some people's judgement seems cr*p (too hair-trigger, not enough taken into account, or even worse: they are weirdly biased, i.e. they're not being very objective); some people have better judgement. I have pretty poor judgement about anything that I don't _really_ know about (so not about a huge amount). When people have good judgement and their opinions become more solid over time, then those people are special IMO, because they now have enough experience to be pretty sure that they are right, and that makes a big difference. I mean, if someone with little experience reads something by someone like that, and then reads something by someone else that is completely contrary, then how are they meant to know with confidence what direction to go in? How can they impose themselves and their chosen direction on others when they have no real weight/confidence behind their own opinions, even if they listened to someone like you and think they are right? (And they probably shouldn't try to.) People with experience are special when they have good opinions (not someone like me, I don't have much experience). That's from my own observation, and it is just my opinion that may or may not be correct. Opinions always change when something contrary to them happens; even if it's just an exception, it still changes them. Sometimes it can be painful.
I think that's one of the biggest misconceptions in our society, i.e. that someone can just read a book by someone else, who has all the experience and well-formed opinions, and be anything near as good as them by simply reading it. It might happen now and again, I suppose, but I think, in practice, there aren't really any shortcuts. I.e. I'm not interested in reading your book, but I would hire you, if I were in a position to do so and it made sense and you were willing, to lead a project. That's how people who are old hands (i.e. experienced), such as yourself, tend to make me feel: I know I can't become you by reading your book. I can just listen to what you have to say and have an opinion, but really I know I need to figure things out for myself. Otherwise I'll never be sure about anything, i.e. if you say continuous delivery is good I'll give you the benefit of the doubt, but truthfully, without having the experience myself, I don't know sh*t. I also think that because of that fallacy, sometimes people who are experienced and good at what they do (from experience) aren't always as valued in a company as they should be; they actually aren't necessarily all that easy to replace.

Edit: there's a guy called "Uncle Bob" who goes around lecturing people about programming, and he talks about how young programmers these days aren't mature enough, how they don't do things the right way, but I know they just need experience. They need to walk the path of fire that everyone like yourself and Uncle Bob has walked. _That's_ what they _actually_ need... and it does involve some pain, because it's a painful path. It always is, IMO.
16:00 One use case where I find pull requests quite effective is working on a feature branch in a remote team. I like to push my code to a feature branch before it is complete and open a "not ready yet" pull request as a way to 1) easily visualise my changes to the existing codebase, and 2) easily reason about my code with a colleague who, for example, is not physically there (e.g. on a different continent or in a different timezone).
On "you cannot do serious work without feature branching": I worked on a government census project in Germany for the first 5 years of by working live. We where using Subversion (SVN) at the time. And branching in SVN was huge pain, borderline not working. So there where basically no feature branches. (Maybe two or three on top of my head...) It worked and we delivered. So are you saying this just was not "serious work"?
15:40 Sure, most people would prefer to work with skilled people. In reality, many teams have members who are at very early stages of learning their trade and/or the tools they're using. Pair programming could solve the problem in a team where the majority of developers are skilled. The question is how to work with teams where the majority of developers are still unskilled and cannot yet deliver code that passes even a modest quality gate.
I often find myself in the same environment. I wish Dave were able to address this in one of the videos! Hope this comment gets more attention! I've sent these and other videos to those unskilled developers you mention and it changed nothing in their approach; I find it very difficult to communicate with such people.
The problem with some of this is that you can say 'I did X and achieved Y', or even find a positive correlation between a practice and an outcome, but it doesn't say much about whether X causes Y. To do this is little more than cargo-cult thinking, and some companies do this: setting up practices which appear to follow the form of Agile while fundamentally missing the philosophy. What philosophy? If you read The Selfish Gene by Richard Dawkins you can see how evolution works through small changes, selection and iteration. All the unit tests and continuous deployment in the world won't help if users are not able to provide effective feedback and sprints are preplanned months in advance. I'm seeing 'agile' become empty; just a new wrapper around the old heavy up-front planning model.
At a job I worked at many years ago, they decided to become "more agile" by, amongst other things, ditching trunk development and switching to feature branches that were merged once a month by someone outside the development team... note my ironic use of "more agile".
To play the Devil's Advocate for your point at 8:34, you haven't necessarily established a causal link. It may be that the devs who can keep their branches short and well integrated are simply _better in the first place_, and so would be producing "better software faster" regardless of the system they used.
@@HansLemurson you’re welcome to replicate our results. I’ve several years working with many teams in a very large enterprise seeing the same measurable outcomes.
I am a tech lead in a mid-scale game development studio, and I worked in AAA as well. While these tools help, and the big studios especially do use them, if you follow recent news about AAA game releases you'll notice a trend in the last 5 years: catastrophically broken releases, to the point of legally mandated refunds. There are many reasons, of course, like bad management and unrealistic deadlines, but one of the factors, in my view, is that CI and TDD apply very well to problems with clearly defined (or definable, after some iteration) success conditions. This is great for technology: the engine, library integrations, OS support, etc. However, it is not well suited to behaviors that are clearly wrong but not in a way the software itself can figure out (there is some experimentation on machine learning for this, but it's still a bit green and unaffordable for the medium or smaller studios).

Another component is the myth of the theoretical designer, similar to the idea guy (this is being pushed against, but for now it is still a big problem): the perception that the designer defines how the game should behave and iterates on those ideas, and it's up to programmers to make that behavior take place. The programming community is steering towards content-driven behaviors, giving tools to designers and content creators to generate the behavior instead of coding it directly. This is far more testable, but the role of content creation, and the responsibility of designers to dirty their hands with it, are still not fully recognized by the design and production communities. Until this is fixed, we are in a situation where the only pass condition on a lot of the game code is the opinion of the designers. I wish we could automate that :P

Another huge factor is programmer training, but that is beyond the scope of this discussion, though pair programming was mentioned.
I think that is a misunderstanding of SW dev in other fields. The destination is no more known than for a game. I do think that games bring some extra challenges, but they are different, not more extreme than in other fields. Writing software for medical devices, cars, or spacecraft brings different challenges too, some of them at least equally difficult. I think this is a cultural thing in the games industry (and lots of others): we all tend to think that our version of SW dev is a special case, and it probably is, but I don't think it rules out these practices; they work in all of these fields. TDD in particular is even more important when the destination is unclear, because it gives you the chance to design SW that can change when you change your mind.
@@ContinuousDelivery I did not mean to imply these tools do not apply; they do, they help a lot. But they often get blamed for not fixing the aspects of the general problem that they are not meant to fix at all, when the symptoms do manifest in the software, which is supposed to be their domain. I have written (or worked in teams that did) extensive TDD for collision testing and data serialization, but writing a test to validate subjective game design choices is far beyond my capabilities. It's not when the objective is unclear that I think TDD helps less; it's when the state of failure is subjective, regardless of whether it will be your definitive objective or change soon enough. For these scenarios I believe a better solution is data visualization tools, for designers and testers to better understand the meaning and behavior of values. Another big problem is testing external libraries, like a physics engine: you use the library to avoid having to solve that particular problem, therefore you don't invest in becoming familiar with its inner workings, therefore you are ill-qualified to write tests for it, even for your use of it. It's not impossible, and in the long term it should be done, but it's an extra layer of difficulty.
9:04 I'm the only physical person working on the file, and different branches are different parts of the program that may be incomplete. So merging them each day (I don't even make progress each day) would jumble together things that aren't connected and are independent of each other. (ADHD, I bounce from part to part.) In this case, wouldn't it be better to wait to merge, so you don't end up with one master that has a ton of incomplete projects with unusable code muddling everything up? Though, tbh, I'm still in the setup phase, so everything is being pushed directly to master. 15:06 Yep, this is how my "team" works. There is no trust between the member (singular) of the team.
My principal issue with CD is the 'organic' growth of the code: committing quickly means committing partial work and patching around it to make it work. In the end the system looks like one of those old one-room houses that got extended by adding room after room (you see some in rural areas), until someone decides to just raze it and build a mansion.
Waterfall development was required by ISO software certification ;) Not sure if waterfall is still in ISO, but ISO is usually required in medical software development. The thing with waterfall, and its advantage over agile, is that it is much more controllable and has distinguishable, planned steps. I'm not saying that waterfall is better, just that it gives more control and is better planned (and agile really sucks at planning ;) ). In some cases, where planning and predictability are crucial, waterfall is the way to go. I think waterfall can be done reliably and well, but it requires very good developers on the team who really know what they are doing, while agile doesn't have such high requirements. When I am doing some project by myself, with tools that I know perfectly, doing things that I know perfectly how to do, I don't play at agile; I go straight to waterfall, I get the proper documentation and a proper plan, everything goes well from start to finish, and everything stays within exact time estimates. In such cases agile would be just a waste of time and quality.
In most regulatory frameworks I have seen, waterfall is kind of assumed, but not mandated. I can't remember the specifics of ISO, but usually you can deliver what is being asked for in a better way.
Reason number 8: failure to protect the customer from themselves. A personal mantra I developed over my career: as a software engineer, your first priority should be to defend the system against the good intentions of the customer. The customer, in this case, being whoever is asking for the development to be done. They are the first to fall for buzzwords, the first to demand illogical and useless features, the first to ignore business process and ask for alterations that would kill their own business. Protect your customers from themselves. It saves them time and money and helps their business stay alive. By extension, it keeps you employed.
How would you go about enforcing design patterns the team has agreed upon without a code review process? For example, I am an SDET, and one pattern we follow in our end-to-end test suites is the Feature -> Test Step -> Page Object pattern. It would be easy not to follow that established pattern, which can lead to a confusing code base. EDIT: I see now that you would probably recommend pair programming. Unfortunately, in my current job this would mean pairing up with people in timezones 12 hours away; not something that can be easily done. The kind of synchronicity required for pair programming is not something I think we'd be able to implement, unless I'm missing something obvious.
In the past I have worked on teams that wrote static analysis tests to assert this kind of thing. One of my colleagues wrote Freud ("analysis" for code) to help create tests like this: github.com/LMAX-Exchange/freud
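For anyone curious what such a convention test can look like even without a dedicated library, here is a minimal sketch in Python (the directory layout and the "*Page" naming rule are invented for illustration, not taken from Freud):

```python
# Hypothetical convention check: every class defined under tests/pages/
# must be named "*Page", so violations fail the build instead of waiting
# for a human reviewer to notice them.
import ast
import pathlib

PAGES_DIR = pathlib.Path("tests/pages")  # assumed project layout

def test_page_objects_follow_naming_convention():
    violations = []
    for source_file in PAGES_DIR.rglob("*.py"):
        tree = ast.parse(source_file.read_text())
        for node in ast.walk(tree):
            if isinstance(node, ast.ClassDef) and not node.name.endswith("Page"):
                violations.append(f"{source_file}:{node.lineno} {node.name}")
    assert not violations, "Page Object convention violations:\n" + "\n".join(violations)
```

Run under pytest like any other test, it turns the team agreement into an executable rule rather than a review checklist item.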
Waterfall projects have very low efficiency, but if waterfall is really required it can be done right; you just have to expect to spend the largest part of the time in the early stages to actually get them right, probably thousands, or even tens of thousands, of percent more than many people imagine. The obvious downside is that it is hard to measure how well a project is going until things start to fall into place, and no one really wants to work for several years on a specification that fails real-world tests immediately. Avoiding waterfall in any situation where it can be avoided should definitely be the first choice.
It's hard to pair program, though, when the developers in a team are all in different timezones and don't share work hours. Not sure there's a better solution than PRs for this kind of team.
I think in this case PRs are good. When working remotely there should always be some overlap, so you can talk to your teammates. Having worked in such an environment, we put much more focus on design: one person always made a proposal and then discussed it with the other. This allowed for short, effective sessions, and when it came to writing the code everyone involved knew what to expect, which made reviews much smoother.
He goes from a very evidence-based critique of practices to an opinion-based critique of code mainlining without a pause. Google does have code reviews, which are indistinguishable from PRs, yet he holds up Google as an example of a large codebase that does not use them (I guess to fit his narrative).
That is a conscious choice asynchronous "teams" and companies make: you cannot work together properly, so you have to optimize for parallel work and more upfront planning and coordination, as opposed to the lighter planning and coordination required for synchronous teams.
@@ottorask7676 It seems it's more often an unconscious choice. It's generally treated as having no impact when, in fact, it has a very significant one. And so teams are built remotely, and the recognition that this will mean more effort put into planning, more effort put into communication, etc., is lacking.
Yep, the more changes you make over a longer time, the easier it is to integrate them back together. That's how we know that if we introduce a koala to a polar bear they will breed to make the perfect vegetarian predator. Works every time. Well, except for the times we've tried it, but we can all believe what we want!
I do have one question about code ownership: how do you avoid it, and more specifically, how do you avoid it in small teams delivering reasonably sized projects? No one can be an expert in everything. An Android front-end developer might tweak a thing or two in the back-end service, but they are very unlikely to make meaningful changes, and the back-end developer might be able to tweak some things in the Android app, but will be very unlikely to deliver a full feature. So unless you define "a project" as "the back-end service" and "the Android app" (so that all team members at least share the same problem area), you do end up in a situation where people rarely touch each other's code.
The real problem is this: doing TDD, CI and Clean Code is HARD at the beginning, and to be effective you have to practice a lot! So a lot of programmers just say "Oh! This does not work in real life! Stop it!". But it's just laziness. I started to do TDD and I'm going really fast. I don't spend much time on manual tests because all my tests are solid, so I can just run the tests without running the entire program! But it's really hard to explain how I go fast when I do TDD. Anyway, nice video! Thanks!
Yes, I think that is true. I wish that when children were taught their first lines of code, it was in the context of TDD - we'd grow much better programmers if that was what everyone believed "coding" to mean. When you learn maths, you have to "show your working"; when you learn to code, "show your working" with tests.
How do you do effective pair programming remotely? Sometimes asynchronicity is nice. Another problem is that you cannot impose pair programming if it is not in the culture of the company, or at least of your team, so you might not have a choice. I wonder if there is anything that can be done in those situations to improve things. What if you are programming alone on a personal project? I guess that is raw TDD by yourself, right? And what happens with big changes that are not easy to split - those changes that break a lot of things at once, like a language or framework version upgrade on a large codebase that is only partially covered by tests? Especially when the language is an interpreted one, so there is no compiler to help you discover issues in code that is not covered by tests.
I’d be interested in thoughts about software systems that have hardware in the loop, or hardware at the end node if you will: for example IoT devices, vehicles… autonomous vehicles… How do you wrap that last layer in an automated test?
Modular architecture: use "ports and adapters" at the edges so that you can do the vast majority of testing in simulation. This allows the HW to change without breaking the SW (only the adapter), and the SW to be developed before the HW exists. This approach is difficult only to the degree that you allow hardware-induced concurrency to "leak" into the software, so design the SW to manage the concurrency. Designing the SW to be async helps a lot with this strategy.
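To make that concrete, here is a minimal ports-and-adapters sketch in Python (the sensor port, threshold, and names are invented for illustration): the decision logic depends only on the port, so it can be tested entirely in simulation, while the thin real-hardware adapter is the only piece that ever needs the device present.

```python
from typing import Protocol

class TemperatureSensor(Protocol):
    """The 'port': the only thing the core logic knows about the hardware."""
    def read_celsius(self) -> float: ...

class SimulatedSensor:
    """A test-time 'adapter'; a real one would talk to the device driver."""
    def __init__(self, reading: float) -> None:
        self._reading = reading

    def read_celsius(self) -> float:
        return self._reading

def cooling_required(sensor: TemperatureSensor, threshold: float = 80.0) -> bool:
    # Pure decision logic: fully testable with no hardware attached.
    return sensor.read_celsius() > threshold

def test_cooling_kicks_in_above_threshold():
    assert cooling_required(SimulatedSensor(90.0))
    assert not cooling_required(SimulatedSensor(20.0))
```

Swapping the simulated adapter for a real one changes nothing in the core logic or its tests.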
I have personally found that unit testing has been incredibly useful at catching all the "null/starting state" things a human mind can miss - too often I find I develop code as though the system is "mid state" (is it just me? Maybe!). I have found it really helps focus the SOLID principles (especially the D and S). And finally, most (useful) software is sufficiently complex that we cannot predict all the interplay, and UT is a solid first step to help manage that.
It is an interesting observation. Would you have a hypothetical example of developing as if the system is "mid state"? Do you mean as if you could code unit tests to be run in a live system? I can see it for a stateful class instance, but not for a pure function, so I suppose it is rather for OOP?
@@rafeu2288 All I mean is that I often forget to properly account for an "empty" system - probably because most code I've ever written has gone into existing systems. If it weren't for unit tests, my code would break the system when "starting with no data"!
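A tiny illustration of the kind of bug being described (names and example invented): written with a "mid state" mindset, this would just be `sum(orders) / len(orders)` and crash with a ZeroDivisionError on a fresh install; the empty-state unit test forces the missing branch.

```python
def average_order_value(orders: list[float]) -> float:
    if not orders:  # the case that "mid state" thinking forgets
        return 0.0
    return sum(orders) / len(orders)

def test_average_order_value_on_a_brand_new_system():
    # A freshly installed system has no data at all.
    assert average_order_value([]) == 0.0

def test_average_order_value_with_data():
    assert average_order_value([10.0, 20.0]) == 15.0
```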
The testing ideal you've described sounds great, but my reality is different. We'll make a change, deploy to test, then perform some ad-hoc testing to check the change behaves as expected. Then we'll run the automated tests, which have years of investment, and a bunch will fail. We'll dutifully work through these, but 99% of the time we'll end up changing the test, not the change we're introducing. In my mind, when the tests give these false-positive results they have failed to provide value, and are instead a liability in terms of the effort to 'fix' them. No one around me seems to share this view, though. Is it just me? Am I going crazy? There has to be a better way.
What you describe is a common symptom of bad tests. One of the causes of this problem is writing the tests after the code is finished rather than before. Writing tests afterwards means that you end up testing that the code you wrote is the code you wrote: the tests are tightly coupled to the code, or system, that you are testing. Writing the tests first tends to make you focus more on the outcome you are trying to achieve. This is a good thing, because desirable outcomes are much more durable than solutions. It means the tests are less likely to be "wrong", and so are better at telling you the boundaries within which you can safely change the code. If you have tests like these and you change the tests, you are changing what the SW is supposed to do. I'd start by trying to introduce some of these "outcome"-focused tests for new work, and maybe for the key behaviours of your system. Then decide whether it is worth re-working what you have now, or dumping it and replacing it with better, outcome (behaviour) focused tests.
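A hedged sketch of the distinction (the Basket example is invented, not from the video): the first test is coupled to how the discount happens to be implemented and will produce exactly the false failures described above whenever the internals are refactored; the second pins down only the business outcome.

```python
class Basket:
    DISCOUNT_THRESHOLD = 10

    def __init__(self) -> None:
        self.items: list[float] = []
        self.applied_rules: list[str] = []  # internal bookkeeping

    def add(self, price: float) -> None:
        self.items.append(price)

    def total(self) -> float:
        total = sum(self.items)
        if len(self.items) >= self.DISCOUNT_THRESHOLD:
            self.applied_rules.append("BulkDiscountRule")
            total *= 0.9
        return total

def test_coupled_to_implementation():
    basket = Basket()
    for _ in range(10):
        basket.add(1.0)
    basket.total()
    # Breaks on any internal refactoring, even when behaviour is unchanged.
    assert basket.applied_rules == ["BulkDiscountRule"]

def test_focused_on_outcome():
    basket = Basket()
    for _ in range(10):
        basket.add(1.0)
    # 10% off ten 1.00 items: survives any refactoring of the internals.
    assert basket.total() == 9.0
```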
Dave, how would you respond to this slightly changed statement from the presentation: "These ideas might work on simple web apps (or greenfield projects), but not on my huge legacy system that is 20 years old, a mess, and still needs to be changed because a bank/government branch/insurer etc. depends on it." I have seen such code bases: lots of devs (30-40) committing into a codebase that is basically a "ball of mud", where everything "could" destroy anything and where you therefore have to test the whole system at once (no separate modules), and where testing consequently takes hours and red tests are hard to trace back to a single commit. How do you tackle a situation like this? What is the strategy out of those vicious cycles, into a mode where you can actually implement all the strategies you describe (which I would have preferred when I was there)?
Most (not all) big orgs that practice CD now started from where you describe; it is rarer to begin with a green field. Size of codebase is not an issue - you can do this with very big repos and codebases. The problem is the culture, the poor testability of the system, and sometimes (reasonably often) very inefficient deployment: if it takes 2 hrs to deploy your code, you won't be able to build, test and deploy it in an hour! So you work to optimise these things, usually starting with eliminating manual regression testing, which quickly leads you to automating config management and deployment.
@@ContinuousDelivery thanks for the answer. That specific system already has a comprehensive automated test suite. The integration tests need >=1.5h; the e2e tests with the UI need the whole night and a fleet of computers. The problem is the "big ball of mud": the business processes run through nearly all parts of the system, and the shared data model increases the side effects even more. Thus, you never know if you accidentally broke a remote part of the application, and testing after each tiny step is prohibited by cycle time (and the resources needed). Defining and enforcing modules/components in such a system would be my first guess at how to move forward, but such refactorings are costly and compete with the cry of the business for features (and some business features are legal requirements). I would be interested in tips about how to untangle such a mess.
Hey Dave, great talk! Thank you for sharing your wisdom. Quick question: how would you solve the pair programming problem on a team whose members are spread across multiple locations? My team has contractors located in several different US states, and most of my teammates are deathly afraid of abandoning our PR process for code review.
Remote pairing works very well; all you need is the ability to share a screen and a shared repo so you can hand over control on commit. Convincing people is the difficulty.
The only times I write sub-optimal programs are when I am forced to work in sub-optimal ways (DevOps, Agile, Scrum, to name a few). I get the best results if I can make my own decisions (languages, libraries, frameworks, methodology), but unfortunately that's not always possible.
You say that you're against feature branching, but then you say "as long as you're merging into master at least once per day". I'm guessing this means you prefer trunk-based development, but that you think it works fine if someone wanted a workflow of feature branching with a PR to merge into master at least once a day? I could see myself convincing my team to do feature branching with daily PRs, but I can't see them doing trunk-based development, so I'm wondering if you see any issues with that approach. I also think it would be nice to explain how to handle schema changes with CI in a future video (I had a look, but I couldn't see one related to this currently). Handling code changes with feature switches seems reasonably easy to do, but I can't quite understand how DB schema changes should be managed with CI.
I have listened to several of your talks on this channel, but this one is remarkable because it provides me with all the arguments to convince my own team to change the way we develop our software. Automated testing and refactoring are difficult to introduce in a team when the software has already been written without these practices in mind; it's hard to add tests to existing software because it wasn't designed for testability in the first place. Anyway, thanks for this talk! It will help me.
Onboarding and knowledge gaps between self-taught devs and those with a computer science degree. A standardized interview process, a very clear road map. No discrimination based on fluency in English.
On the topic of CI vs feature branching: after some experience, I am more inclined to agree with CI in most cases. But are there any uses for branches, as a git tool, that we could still keep in our workflow? One example I can think of is when we need to update a module and are not sure whether we want to keep the changes, and duplicating the module and changing the copy wouldn't work because the module is used by other modules.
This seems like a good approach and I really want to try implementing it, but I have a question. Let's say 100 people are implementing features and 10 people don't finish. Their unfinished commits are still on master, but the team has to push a new release. What happens then?
Unfinished code (meaning new features or a replacement feature) should be hidden behind some kind of config. Small refactorings and minor changes don't need flags, but should be covered by automated tests. Early in development we hide it behind a compile/packaging-time flag within our build scripts. Later on we convert it into a deployment-time flag (or maybe even some kind of dynamic plugin-like config) so we can have different behaviors in different environments and test with/without the new features. When the feature is in "open beta" we change the flag to a runtime/properties/preference/opt-in/something config. Whether this is only done in the UI or somewhere deeper down in the system depends on the kind of change. Now, I really dislike the question "Let's say 100 people are implementing features and 10 people don't finish", because it implies everyone works alone and that one single person having a bad week can make the entire thing fall down. Talk about bad leadership and development practices. Either the entire team fails or succeeds. If 90 people are "done" and 10 aren't, what the hell did those other 90 people do? Sat around fiddling because they "did their part" and had already gotten the gold star from the boss? They didn't help? **It's not the fault of the 10 people who got stuck that the plan went bad; it's the other 90 people, plus the leadership, who need to get their heads straight.**
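As a concrete illustration of the deployment-time stage in that progression, a flag can be as simple as this Python sketch (the flag name and checkout functions are invented for illustration): the unfinished code is merged daily and ships dark, selected per environment rather than living on a long-lived branch.

```python
import os

def new_checkout_enabled() -> bool:
    # Deployment-time flag: set NEW_CHECKOUT=on per environment.
    return os.environ.get("NEW_CHECKOUT", "off") == "on"

def legacy_checkout(cart: list[float]) -> float:
    return sum(cart)

def new_checkout(cart: list[float]) -> float:
    # Work in progress: integrated into trunk, exercised only where the flag is on.
    return round(sum(cart), 2)

def checkout(cart: list[float]) -> float:
    if new_checkout_enabled():
        return new_checkout(cart)  # unfinished code, deployed but dark by default
    return legacy_checkout(cart)
```

Once the feature is fully rolled out, the flag and the legacy path are deleted, which is the cleanup cost that comes with this style.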
@@ddanielsandberg I’m on a three person team. We failed the sprint goal because I couldn’t deliver what I committed to. Could be my fault for not speaking up early when I felt like I was stuck. Man did I feel bad.
I can almost agree with all of this, especially doing data-based reasoning. And when he mentioned that it's possible to do code reviews without PRs (or what he probably means more generally: feature branches) I got interested. But then he sets up the straw man that code reviews are about mistrust in the team, and starts attacking them on that basis. Then he concludes that you don't need code reviews because they can be replaced by pair programming. The reasoning? That the data shows pair programming leads to better software - but that wasn't questioned in the first place. The data also shows that code reviews lead to better software, and certainly they do that in combination with pair programming, so why should either one be expendable? He doesn't give any data on this, though. So in the end I still don't know how to do code reviews without PRs.
It was funny to watch a team (which uses CI) plan a rollback of changes after discovering the work couldn't be finished in time, instead of just not merging a feature branch.
Thanks for your awesome content 👏 I just had a look at the State of DevOps report for 2021, and in the pdf they talk a lot about the "platform model". Would that be a good topic for another video?
Man, I wish I worked at any company that works this way. Here in the real world, we have massive, 25-year-old code piles that were developed in an ad hoc way with whatever the technology du jour was at the time, in several languages that are incompatible with each other, and we are given zero time to fix technical debt or even reach a consensus on what we're building. Even the build system itself is so complex and brittle that any change means potentially days of work. But sure, *I'm* a bad programmer.
A couple of comments... First, thanks Dave for these videos. I first learned about and used XP 21 years ago. I had just left a massive waterfall project that integrated code... well, once, about a week before a release. Needless to say, CI made perfect sense to me. 😂 The other XP practices all resonated as well, although TDD did bend my mind for a couple of weeks while I learned how to do it. Second, you hit the nail on the head about the pull request process being created for distributed open-source projects. What gets me, though, is that no one ever seems to see the latency in that process as comments go back and forth. IME, developers will submit a PR and then start something else, only to be forced to context-switch when the first comments arrive. There's also an aspect of the sunk cost fallacy at times, because a developer has potentially poured so much work into the code before the PR is ever submitted that they will naturally be more defensive about criticism. Hell, I felt it myself the first time I worked in an environment like that! All in all, a great video, and I agree wholeheartedly with all of your points!
Sorry to be the cynic, but none of this matters in 99% of workplaces. All of these things get lip service; then you join the team, look at the source base, and nary a test is to be seen. The CI/CD servers have cobwebs and their reports are garbage. "We don't have time to stop for gas" is the motto.
This led me to a tangential thought. I think a lot of management know that Agile is fab and waterfall is bad, but only on the basis that they understand waterfall; their understanding of Agile is often only as deep as the name itself, and it's a great name for a development process. If it were called "Incremental", say, I'm sure some management would not subscribe to it. The crunch comes when projects are implemented in an Agile way but the expectation remains 100% completion of the original concept (plus months of additions and tweaks), as if it were waterfall++.
Thank you for your video, it was very helpful. I've been watching this channel and reading about CD for about a year. This is the first time I've heard you mention working with PRs in open-source development, and that is exactly my case. I hope you have more suggestions, because the ones you mentioned are infeasible for me. First of all, in my case, no one is trusted to push commits unilaterally. This includes myself, and I made that decision. Pair programming with one senior developer present wouldn't work for logistical reasons (e.g. timezones and availability). Historically, some problems were only spotted when more than one person was doing the review. And there are some requirements I don't know how to automate, for example the logical grouping of changes into commits. The way I try to address these problems, while using PRs, is to provide training to more junior developers, to do the code reviews as a group, and to have a script for automatic rebasing of PRs (only fast-forward merges of PRs are allowed).
Is there an alternative way to do trunk-based development without pair programming? It seems trunk-based development requires pair programming when you have juniors on your team.
Yes, you can organise reviews differently and still do TBD, but it is not as good as pair programming. How are you training your juniors? Pair programming is by far the best way to do this; it will get them up to speed MUCH faster. If you really can't do it, then just have someone monitor the juniors' commits and review at that point, reverting if necessary (and probably then pairing!)
2:58 Now I understand why waterfall is not very popular, and why a waterfall-methodology project can run into trouble. That said, there are projects, including some that could only use a waterfall model, especially when making a safety-critical system that is tightly coupled to hardware. Hardware development can be slow and costly, depending on what we are trying to create. If we are, for example, trying to create a software system to control a car's engine, brakes and other vital, safety-critical components, we do have specifications at the beginning of the project, and these specifications very rarely change during the project. As such, agile methodology does not make sense here. You could in theory deliver parts of the functionality to the customer (= the car manufacturer), but this is only useful for testing/validation and not actually usable in a real product. Imagine having a car with firmware able to control the brakes but missing engine control. So yes, there are cases where the waterfall model is preferred. After all, it would not exist if it were completely useless. However, I'm open to being proven wrong.
Even for those kinds of systems waterfall is now pretty much discredited; it is not how the two biggest car manufacturers in the world make cars, for example, including all of the software that controls them. It is counter-intuitive, but the data says that it is even more important to work in small steps for safety-critical systems than for others.
@@ContinuousDelivery Hmmm, maybe this topic is worth a video that goes deeper? Because I think I see the point of delivering parts of the work on a constant basis in order to keep momentum and quality... but I do not see the point of delivering a half-finished product to the customer.
I get frustrated when programmers focus too much on technical details. Sure, I love the tech and expressing things in code, but the most important stuff is the problem we are going to solve and the behavior needed. Software developers tend to create a lot of technical complexity and, subsequently, technical debt. Hence we need structure, like that which DDD gives us. Yet I have encountered some resistance to the idea: "That doesn't work. You cannot map reality directly to code. We have to create all these classes." You can, for the most part. It just requires the skills and the right perspective.
Thanks 😎
@@dauchande BDD is best for describing user journeys so doesn't apply to things like interface testing, but what is the case for TDD not working?
Yes!
Additionally, if you deliver quickly, you can iterate more quickly, and refine more quickly. You buy yourself more time to improve.
Lol late and crap! Yeah not the product you want to deliver.
I will compromise if I face a hard deadline within the next two days or so, but I really try to avoid it. However, my ability to compromise once in a while depends on me really keeping my code clean and well structured at all other times; otherwise my code would turn into a horrible mess.
While I generally agree with most of your videos, there's always one question that comes up in my head: where on earth do you people find these teams where everyone wants to learn and be better? I keep running into teams that during interviews talk vividly about how they have CI/CD, automated tests and all those fancy things (they even show examples and whatnot), and then once I start working, all of a sudden none of that is important and the only thing that matters is getting the next minor feature out ASAP, and the quicker you can shitfix it the better.
I've seriously been considering leaving development altogether after a couple of years of job-hopping from one backwards team to another in frustration over the almost complete lack of interest in improvement. What I've found is that most developers don't seem to care at all about the quality of their work. As long as it does mostly what it says on the tin, they're content.
I've experienced this same problem at nearly every job in software, with any (software) language. Know-nothing managers make the problem worse by delegating decisions to the team, which guarantees that the majority of mediocre programmers always shout down or outvote those trying to make things better.
@@cloojure Even worse when the know-nothing manager delegates all decisions to the self-taught wonder kid that "gets things done" and is online 24/7 to fix things that never would have been broken if a semi-competent developer had done them a little slower.
Yeah, I think that’s the biggest issue here. How can we do things well if we keep getting shut down by our peers or managers?
I just went back to see "Execuses" again before moving on to something else.
Mr. Farley describes himself at 00:01 as a "Proffesional (sic) Software Developer"...
Give me a break... 🤣🤣🤣
@@rustycherkas8229 I'm pretty sure the editor is a different person, but yeah. Proofreading stuff is definitely a good idea, lol.
I agree with your opinion of feature branching, but not in all cases.
Basically two cases:
- interns or new employees, whom I don't fully trust yet, especially when they touch important pieces of code where they could break everything for everyone (also addressed in the video)
- experimental features, until I'm sure they will make it in
Okay this shirt takes the CAKE
it's SPICY, isn't it? 😝
Nerdness has no age 😂
Waterfall is the mind-killer.
I want to “like” and “subscribe” to these shirts….
"It's not somebody else's responsibility to give us permission to do a good job". I think you need to do a video just on this idea. Robert Martin has discussed this quite a bit with the notion of professionalism and being responsible for the code we write no matter what management asks. As professionals, we need to get better at pushing back.
UPDATE: as I think on this, I believe this is probably one of the most important ideas to pass on to our younger peers. Don't ask for permission to do a good job, i.e. pair program, write unit tests, refactor code.
Good explanation. I still have some questions:
1. The code review question was really bugging me in the CD workflow. In my experience, code reviews done by people NOT involved in developing the code are the most valuable: they will spot things the developer(s) became blind to or ignored for whatever reason. I am not sure "do pair programming instead" is a convincing answer, but OK. I wonder how to convince a company to cut their dev force in half (intentional extreme) and let them do pair programming because it's faster and more efficient. Is it twice as good? More, perhaps? If so, it could convince some bosses, I guess.
2. The other issue I see is the commit history and how it ties to a work tracker like Jira. Do you hold on to commits until you have a larger body of work? Do you commit simple typo fixes? Do you include a Jira ticket in every commit, "JIRA-1234 fix typo"? Should that be enforced? How do you refactor? Does the entire history of you learning about the problem and refactoring it three times get immortalized in the Git history? People are generally quite bad at writing commit messages if there are no checks on their formatting. I have worked with such code bases and the commit history was just useless: finding out how a certain change came to be was next to impossible due to the sheer number of meaningless commits. It is hard to track down the work item, the people involved, the process, or even the exact change... and when there is a regulator breathing down your neck, that's not fun. Feature branches that tie back to the work tracker, trackable pull requests (code reviews), and squashed commits with enforced formatting that also tie back to the tracker solve this quite nicely. But they are incompatible with the CD workflow, sadly.
3. And one other thing I find quite problematic: how do you manage failures? For example, Google does not allow things that are not tested into the code base. If something slips in anyway and then breaks, they first roll back and then try to figure out what went wrong. In the CD workflow you would have the code broken much of the time, unless people cobbled together their own private CI and ran it before committing - which is probably not always possible or desirable. It is the commit, and the pipeline triggered by it, that verifies it. Rejecting it post-commit or automatically rolling back are probably not OK in CD. So what happens then? Others wait for the person who broke it to fix it? And if "main" is broken much of the time, other people like QA might struggle. Is the odd green "build" always shipped? Having confidence that main is "good" is nice, as there is a place to fall back to, or ship from, at the end of the day.
I understand that gate-keeping is slow, but to me the above are valid concerns, and I don't understand how the CD workflow accommodates them.
1. The data on pair programming says that 2 people complete the same task as one person in 60% of the time, so not 2-for-1, but not faster either. But the quality produced by the pairs is substantially higher. The overall impact is that pairs are at least as efficient, and probably more efficient, than singles. The problem with being more definite than that is that teams that do pairing usually do a lot of other good stuff too, so you can't separate the effect of pairing from the other improvements.
2. The commit history still tells the truth, but it is a truth more like a transaction log in an event stream, rather than some kind of time-based snapshot. Yes, include a reference to the reason (could be a Jira ticket) in every commit. You can take this further: adopt some conventions for commit messages, and you can programmatically recreate clear descriptions for releases (there is a sketch of this after this comment). I do a lot of work in regulated industries; we can often auto-generate release notes.
3. Well, part of CI and CD is to work so that the codebase is always good and all tests pass (CI), and so that your software is always in a releasable state (CD). So no, you can't knowingly commit code that breaks things! If you break something you "stop the line" and can't release until you fix or revert the problem; that is what CI (or CD) means. Teams that work this way test nearly everything with automated tests. Sounds slow, but it is not, because you spend time writing tests instead of diagnosing and fixing bugs. Teams that work this way spend 44% more time creating new features than teams that don't.
I have videos that cover all of this stuff on my channel.
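On point 2 above, here is a sketch of what "programmatically recreate clear descriptions for releases" can look like, assuming the (hypothetical) convention that every commit subject starts with a ticket reference; the convention, regex, and tag name are illustrative assumptions, not a quote from the video:

```python
# Generate release notes from commit subjects that follow the assumed
# "TICKET-123: summary" convention, using the commits since the last tag.
import re
import subprocess

COMMIT_RE = re.compile(r"^(?P<ticket>[A-Z]+-\d+): (?P<summary>.+)$")

def release_notes(since_tag: str) -> str:
    log = subprocess.run(
        ["git", "log", f"{since_tag}..HEAD", "--pretty=format:%s"],
        capture_output=True, text=True, check=True,
    ).stdout
    lines = []
    for subject in log.splitlines():
        match = COMMIT_RE.match(subject)
        if match:  # non-conforming subjects are simply skipped here
            lines.append(f"- {match['ticket']}: {match['summary']}")
    return "\n".join(lines)

if __name__ == "__main__":
    print(release_notes("v1.2.0"))  # hypothetical previous release tag
```

A check in the pipeline can enforce the same regex on every commit, which keeps the history clean enough to generate notes like these mechanically.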
The way I understand it: devs push to a feature or temp branch. A pull request is then made to main (or sometimes to a dev branch deployed to a test server for QA). The CI pipeline tests and other checks will have been configured to run on the pull request branch. Only when all checks have passed is the feature branch merged to main (or dev), the pull request closed, and the feature branch deleted (no long-lived feature branches).
@@ContinuousDelivery Thanks! I understand it better now. About the last point, I still don't quite get it. You can run unit tests locally, but until you commit and trigger the CI pipeline that does the integration and runs the integration/system tests, you won't know whether your change breaks the system. So you are not committing bad stuff knowingly, but it will inevitably happen. What I found most difficult when I once tried to move towards this remotely (I work in DevOps) was the immense pushback from everyone. We could not afford (or thought we could not) any breakage of the mainline. It had to be always good, so it could receive pre-validated changes from anyone and be taken by QA or released at any time. Every once in a while someone still managed to break it, which resulted in sometimes significant slowdowns for others, who could not commit or deliver while waiting for that one person to fix it... sometimes leading to a rollback if the pressure was too high and the fix was not in sight. It still sounds like the commit should be pre-validated and rejected if it is bad, or at least automatically rolled back after the fact - but then there is a risk someone might have pulled it. Perhaps it is a cultural thing, and people ought to be more tolerant of such failures? It is possible to prevent them, but that is basically "gate-keeping", which runs contrary to CD as I understand it from your videos.
@@awmy3109 That would be my instinct/experience too (so probably wrong, as per the video), but it is against the CD workflow described in this and other videos. There should be no gatekeeping, no branching, no pull requests. You commit directly to "main", which triggers the pipeline. If it fails, it somehow has to be fixed quickly or rolled back (manually? by whom?). Which is odd, because what people usually hate most is when something unrelated to their work, like someone else's changes, breaks them. I know CD is supposed to solve exactly that, but I must be missing something here. People will break others all the time, and the solution is that the breakage is short-lived? Or that it doesn't happen for some other reason? Tests definitely help a lot, but oftentimes they cannot all be run locally - I physically cannot run 400 containers on my machine to test that my change to a core library did not break any of them. CI will do that, but that happens after I commit. So either I do it as I do today, and as you describe, via a "feature/temp/whatever" branch with the checks running when I open a PR - or I commit directly to main and pray. :-)
I resonate with you
All great points, and I agree. Getting an organization to implement this in its culture is always a challenge. That's the fun. Thanks!
Thank you for being clear and giving evidence.
Only one point bothers me. Code reviews are not about mistrust; they're more about different professional points of view, discussion, and feedback - asynchronous and documented. But to be fair, you can do each of these things synchronously with pair programming - and that isn't about mistrust either.
(Maybe, as a non-native speaker, I don't get the difference between mistrust, distrust and suspicion. I mean not trusting the result of the work.)
Agreed, code review is not about mistrust, or shouldn't be. As you say, pair programming is also not about mistrust. I have never seen code review work better than pair programming because it was more independent; that is simply not a problem that I have ever seen. I have seen people catch mistakes in code review, but in my experience pair programming catches more mistakes. My suspicion is that the "independence of code review" is an after-the-fact excuse rather than a real effect - a guess, because people don't like the idea of pair programming. Having tried both, many times, pair programming has always worked much better for me and my teams.
@@ContinuousDelivery And here is another fallacy: code reviews are not mainly about catching mistakes (although it doesn't hurt when it happens), but about sharing knowledge and ideas how to write code better (aka: refactorings). Also doing code reviews doesn't imply that "people don't like the idea of pair programming". One doesn't exclude the other.
I agree with most of the things said here. But one thing that CI proponents never seem to acknowledge is that it is difficult to impossible to coordinate all the different features and tasks into a release schedule. Features can span multiple iterations. Some claim that all code can be released to production every day, but in the real world that is ridiculous. If you do CI you have to wait until everything is in a state to be released, and that is unacceptable in a lot of cases.
I needed to hear this to scoop up some motivation to continue pursuing automated testing in the builds I contribute to: "The impact of designing for testability on the quality of our code is profound... testable code is more modular, more cohesive, has better separation of concerns; it also hides information better and is loosely coupled. All properties of high-quality code" - D. Farley
This man is a truly good influencer in software development. He really knows what he is talking about, unlike other influencers on YouTube who give bad advice to beginners.
I don't disagree at all, but reducing the issue to "this is how developers should work" ignores the fact that waterfall comes from a corporate mindset. I have worked on some very successful projects in the past, and the features of those that stand out are:
- Empower developers to make the decisions
- Continuous user involvement
- (Obviously) Good automated testing that is constantly kept up to date
Oh, and no branching.
I have also worked in places where the requirements need to go to one committee for sign-off, then the architecture/design needs to go to a different committee, the implementation plan to another, and so on. Where users aren't given time to participate in the development process. A lot of this is done in the name of "compliance" and "we work in a regulated industry". Until the people at the coal-face are empowered to do things quickly and do them well, those companies are doomed to waste millions on delivering bad solutions late.
Well, ironically, the guy who "invented" waterfall intended it as an exercise in what *not* to do, but like Scrum, management picked it up and ran with it, since they're always in need of silver-bullet solutions.
@@dauchande As St Fred said, there are no silver bullets. Actually, I would say that there is one: if you have good people (user and dev) who have the space and the freedom to create good stuff, then they will. If PMs want to dress it up as waterfall or DSDM or Agile or whatever, it makes no difference. If you don't have good people and/or you have a horrible bureaucracy, well then you're fucked from the get-go.
@@ACCPhil Agreed, good people will trump any methodology. But you can certainly slow it down through bad management.
I think it is good to look not only for evidence that supports your claim but also for evidence in opposition. It helps develop one's gut feeling, and it also makes it harder to miss some obvious benefit or problem one didn't think of.
Me too; that is how we arrived at Continuous Delivery, by trying lots of things that didn't work and being hyper-critical of every activity or approach. I still try to be, but these days I find it more difficult to find holes in the approach. If you point any out, based on evidence or better explanations, I'll be very grateful.
PR reviews aren't about not trusting the dev; they should be about having a second set of eyes to catch something the dev might have missed (no matter how experienced and trustworthy they are), and about giving other devs the opportunity to see parts of the code they haven't worked on before. Pair programming just isn't realistic for most teams most of the time.
No, that is just a code review. The PR was invented for open-source projects. Git was written by Linus Torvalds, creator of Linux; incidentally, Linus says to merge changes frequently to avoid problems!
@@ContinuousDelivery Linus also says you should be able to create branches all the time without even thinking about it, which is one of the reasons he created git.
Where I work, pair programming is viewed as wasting developer time, which is sad. Even when I try, like today, to garner feedback, I know the person is not able to dedicate the time needed to really understand the change I am trying to make.
Great content by the way.
"Continuous Integration" is term a devised by Grady Booch in the early 90's. It did not mean daily merging. XP came years later, and promoted the idea of the daily (or more) merge. Both predate modern branching as implemented in Git (by many many years). "Merge Hell" practically disappeared with git and the practice of shorter dev cycles. Most developers have never actually experienced Merge Hell, let alone from feature branching, yet use it as a major argument for daily merges. Whether XP's accelerated approach is better or not, it does not change the definition of the term. So let's all stop saying branching longer than a day can not be Continuous Integration. One may claim it is sub-optimal, but it can still be Continuous Integration.
Multiple devs working in an overlapping area of code have to add noise and complication to the code base if they are to integrate their work while it is in an unfinished state. Devs hiding their unfinished code behind flags, or behind "branch by abstraction", are still hiding their code from the others' execution path - which makes their "integration" just theater. Theater that comes at an expense.
Sure, but all of the modern definitions of CI that I am familiar with include "at least once per day".
Sure, the Git tools improved merging, but there remain many teams that suffer. I consulted with an org that had, rather blindly, split their dev team (formerly a monolithic team working on monolithic code) into many small "feature teams" because "that is what Spotify does". The teams found that they kept breaking things, so they pulled the code they were responsible for into a series of separate branches. I met them 18 months after this event, and their code had never compiled together since. So things are probably better, but merge hell is certainly still a common pain for many, many teams. I see a more modern equivalent of it constantly in teams that claim to be practicing "microservices": these teams have each service in a separate repo (just another form of branch) and then fight a never-ending battle to find a collective change-set of services that work together.
There is a difference between the information hiding in feature flags, branch-by-abstraction, dark launching etc. and that in source-code branches: the former "branch" is at the level of behaviour, not source code. That means very different things in terms of managing change across the code base. Hiding information in source-code branches is a bigger barrier to change and so limits refactoring more.
@@ContinuousDelivery That is rather anecdotal evidence about this one company. There are thousands of other companies that use regular git branching daily without any issues. Of course, we're talking about SHORT-lived branches, not long-lived branches. But that does not need to mean one day.
It's a rather arbitrary time period.
I think the nature of your work naturally leads you to companies and teams that do not function properly, because maybe they're not as experienced or not as good, and such teams are much more likely to need your services than teams that actually know how to do their jobs. That is certainly going to skew your view of what is happening in the industry.
I am not denying that many companies and teams have horrible practices, even thousands of them, but surely that does not represent the industry as a whole.
For those in doubt: keep using your short-lived feature branches, and don't force yourself to meet some arbitrary deadline of merging once per day if your work is not finished. In the vast majority of cases you won't suffer any bad consequences, and in the few cases where refactorings lead to merge conflicts, just use unit tests to make sure the code still works after resolving the conflicts, run those tests again after merging to develop, and you'll be fine.
There's plenty of data that this approach works, because it is the approach used in thousands of companies that deliver software daily.
@@vyli1 Probably (judging also by my own experience, as some developers have a cooperative approach and others just don't) this way of working (CI) requires that developers also use techniques like TDD, pair programming etc., i.e. that devs communicate a lot with each other. Having two developers working on the same task is a good way to improve cooperation in a team (not strict pair programming, but as a way to split work between developers and force daily sessions between them). If devs don't communicate with each other, we get the branching hell, with the misunderstandings and the code stepping on other devs' code that slow the team down - the opposite of CI. So we should always strive to shorten the integration periods, where possible always keeping master in a releasable state.
I just don't understand... does no feature take more than a day to develop? Wouldn't the most continuously integrated code be everyone simultaneously editing the code directly in production? I watched 15 minutes and I'm done.
Excuse me what was with the horrifying TV screen head vignette?
I like code reviews because you get to learn what other people do and spread knowledge about the code. Pair programming is harder to do when you work across multiple locations. I'm still keen on trying it out, though :)
I've heard this before. If you really think (and I do!) that learning what other people do and spreading knowledge about the code is a good thing, then instead of doing it once, after all of the decisions have been made and interesting alternatives discarded, why not do it continuously, while the code is being developed?
@@MatthewChaplain As I wrote, I want to try it once covid is over ;)
I've done pair programming via video call. Just find a tool that has no lag and good resolution and you're good to go. The only problem appears when your pair lives in a region with a big time difference (more than 3 hours either way can be a problem sometimes).
Great talk; data is what separates opinion from reality. Loved the t-shirt!
Surf Arrakis!
Yes, I had to look up the t-shirt; I found one similar but different that I liked more.
Great video again, Dave. Always happy when I see a new video I can send to my friends and coworkers.
As a project progresses, the amount of information you have about what you need to do increases and is refined, so the project should get faster and faster as the refinements to the models get smaller and smaller. That is what I experienced in the one project in my career that was done design-first, with full TDD when moving to code (the only one where I was tasked to choose the process and drive a change). In most projects people refused or ignored TDD and CI, and sadly the pace instead got slower and slower. This was dismissed as a consequence of the code base's size and complexity increasing, as if that were inevitable. But when you have proper cohesion, that should not have any impact, as you always work on pieces of code of reasonable size. Unfortunately, most job offers will land you in the last category, as teams working with efficient methods have less turnover and do not need to constantly increase their size to compensate for the slowdown (at least that's my hypothesis; I lack the data to defend it).
💯Exactly!
Completely agree, Dave! Sometimes opinions are defined by factual events, unfortunately. A recent experience: a manager new to the team exploded one day at his inherited, highly efficient and cohesive team, because we always pushed to main. "This is bullshit," he raged, "we don't do trunk-based development; we need branches, PRs, and reviews." The team was highly productive, producing top-quality code. Well, the team fell apart and I got the hell out of there. Yep, the manager is still there, with support from his senior management (and this is a Fortune 50 company).
Very solid advice! From the school of hard knocks: high quality comes from energized cohesive teams moving fast with building tests as a primary concern.
While I promote pair programming for numerous reasons, for it actually to negate the need for code review requires one of the pair to be "good" enough to have been a code reviewer in the first place. This is often not the case. Yes, two devs who are "almost that good" can level each other up and be good enough together, but this is also not terribly common in many teams.
Honestly curious: do you only allow senior (good) people to review the code of the "less good"? And if so, how do you determine who is good enough to review?
Because this sounds highly inefficient to me. A junior reviewing my code (or pairing with me) will teach me (25 years of experience, still a bad programmer) a lot - sometimes about things I take for granted, or new insights - often by making me explain something, requiring me to think it through better. And it will teach that junior a lot too.
@@berkes I endorse EVERYONE who hasn't written the code needing to review it. For many reasons.
11:16 'There is no tradeoff between speed and quality'.
This is one of the most valuable lessons I've learned as a game developer, and it's so true. Let's not fool ourselves: that's simply how proper software dev works, and we devs need to make that clear to management, the project designers, and everyone else.
I would so much love to be part of a team led by Dave. My day-to-day work model is far from the one described in the video. It's a pity that the next step in my career (which is pretty close) is Tech Lead and I have never experienced a development environment like this. Thank you very much for the videos.
Thank you, and good luck in your next step, did you see this video on advice for tech-leads? ua-cam.com/video/jMpCF0Z623s/v-deo.html
@@ContinuousDelivery I actually did. It was another video full of very important ideas and concepts which I definitely need to take into account. Thanks !
Request: Talk about how an individual can practice CD in a team environment where CD is not a focus. If a developer works at a company that is very culturally tied to Waterfall and Code Reviews, is it possible to practice CD on an individual basis? What strategies can engineers use to start to trickle these ideas up in hopes of getting managerial buy-in?
Well, you can always write your own tests and run them, even if no one else does. To practice CD you need more than that, and I don't think that you can do that alone; you need some level of team buy-in. You can certainly do some stuff to improve the quality of at least your own work, though. For managerial buy-in, it is often best to make the case in terms of productivity and quality instead of technical reasons. I have a video meant to help with that: ua-cam.com/video/Fjde-h_wHsk/v-deo.html
I understand the positives you put forward for CI/CD, but I run into problems in practice.
So if you have multiple people working on the trunk, and they are changing/modifying existing systems, then you wind up needing a LOT of split logic so that you can feature-flag off the new code until it is complete. Add to this when you are dealing with external assets that go with the code (graphics, sounds, sprite atlases), and now you need to maintain duplicate copies of the assets and their loading logic as well. If a new feature takes weeks to implement with lots of testing and iteration and approvals, you have multiple features being simultaneously implemented (both new work and refactors), and the order and timing of releases is fluidly determined, CI/CD becomes problematic.
I think you will cut down integration issues, but at the expense of having to spend a LOT more time making sure partial functionality of in-progress systems isn't accidentally released, thus causing bugs. Also there will need to be a lot of cleanup work done throughout the code base to remove all of the feature flags and older code, again presenting an opportunity for bugs to be introduced.
I'm very interested in people's thoughts and experience on this.
Yes there are downsides/weaknesses to CI/CD and that's okay. I don't believe in silver bullets, just a series of tradeoffs. For your specific example, your tests need to be aware of what feature flags are enabled, what expected outcome should be in both cases, and you will spend extra time both communicating and cleaning them up when the feature launches. Engineers can and will make mistakes on deployment configs for what feature should be enabled, which can lead to breaks/leaks etc.
I think the takeaway from the data is that when you look at the entire project, on average, CI/CD - strengths and weaknesses included - outperforms most other forms of software development. Ask yourself how many engineers in your dev org you would trust to properly integrate/merge large feature branches versus how many you would trust to write a feature flag. How many better decisions could be made if everyone saw the code for every feature as it was being developed, rather than right before it gets merged to master and promptly shoved out the door?
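To make that flag-aware testing concrete, here is a minimal sketch in Python/pytest of running the same assertions under both flag states. All the names (checkout_total, the discount rule) are invented for illustration, not taken from the video or this thread:

import pytest

def checkout_total(items, new_discount_rules: bool) -> float:
    # Invented example: the new discount logic stays dark until the flag is on.
    total = sum(items)
    if new_discount_rules and total > 100:
        total *= 0.9
    return total

@pytest.mark.parametrize("flag_on", [True, False])
def test_checkout_total_under_both_flag_states(flag_on):
    # The old behaviour must keep working while the feature is dark.
    assert checkout_total([50, 40], new_discount_rules=flag_on) == 90
    # The new behaviour is only expected when the flag is on.
    expected = 135 if flag_on else 150
    assert checkout_total([100, 50], new_discount_rules=flag_on) == expected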
The biggest benefit of everyone using the trunk is automatically discarded by git the moment you understand that your local trunk is already a branch of the origin's local trunk. You're ALWAYS working on a branch.
@@marktroyer3128 In my situation, I'm in games. So I don't just have Engineers in the repository, I also have artists committing art assets and scenes and I have designers committing balance changes. I have Engineers adding updates to 3rd party SDKs that can bring a lot of changes, all of which needs to be tested thoroughly before release. And I can't do piecemeal releases to customers, because they require changes to be both correct/stable as well as fun. If we feature-flag changes, that can cross dozens of source files as well as configuration files and art files (which don't have a simple boolean or compile-flag switch).
I'm also working with very small teams, where pair-programming would instantly halve my engineering capabilities.
I'm not discounting CI/CD, just trying to work through practical solutions to make it more workable. Thx for the feedback.
@@Gokuroro True, but I think the argument for CI/CD isn't that there are no branches, but that the more often you commit back to the main branch, the less you have to deal with in the merge. Still, CI/CD assumes that the work can be committed back in small commits, rather than as more extensive changes or additions that require a lot of things to be in place before you can evaluate whether the work is correct and the direction you want to move the code/project.
That's why you DON'T have Developers working on the trunk. Developers ONLY work on the Dev branch -- a SINGLE Dev branch for all Developers, mind you -- and submit their changes to it. An Alpha Tester tests the Dev branch when the Developers say it's ready for alpha-testing, then they flag the newest build that passes the tests for basic functionality. (alpha testing may be completely automated if you have a very mature development process.) Then an Integrator periodically merges the newest alpha-tested build into the trunk -- the Integrator is the ONLY person allowed to merge anything into the trunk. Then the Beta Testers get to work on testing the trunk, and flag the latest build they have tested for advanced functionality and stability. When delivery time arrives, a Deliverer pushes the latest build that has passed Beta Testing to the production environment. This can happen once a month, once a day, or once an hour if you hate your employees and want them to have nervous breakdowns from overwork.
About testing in a different way than just executing with the rest of the environment - it's not always the best approach, because sometimes the time spent mocking out, for example, an entire ECU's behavior and the tools used to monitor it is much higher than just manual testing by running it. I agree with your statements a lot tho :)
Can you talk about how experimentation and prototyping works within CI? I can't wrap my head around it. It seems that if I'm going to make a major change to the codebase, I shouldn't commit those changes unless I know it is the route that we want to take. Same issue for changing APIs, etc.
Do the experiments and prototypes somewhere else, but discard the code and keep the learning. Then re-write the code to production quality. Or make the change to production quality, and back it out later if you decide you don't like it. Both work fine!
@@image357 Not really, because you throw away the code on the branch once you have learned what you wanted to learn, and then develop a completely new version of any code you wrote on the branch back on Trunk. The aim of the branch is to learn, not to create code.
I was thinking about this a bit. Isn't dependency versioning a form of feature branching? "This software uses Version 2.4.5 of library X" is not continuous integration, but it is, from my experience, the de facto standard in software engineering, because it becomes pretty much impossible to keep up with the entire ecosystem of different libraries for even a medium-sized project and always update all your dependencies to the newest and shiniest versions. As usual, I think the answer is not dogma: sometimes you should use CI to guarantee stability, and sometimes you lock down features and dependencies to guarantee stability (not CI). It is sort of a question of project scale and available manpower - project management, in other words.
Anyway, interesting video... food for thought!
As much as I like the State of Devops report, I find it has some circular logic in it.
The song says don't chase waterfalls. I think that is valid :). Waterfall is the JIT of software development: it presumes an optimization that fits into a perfect budgeting plan.
I've been using CI/CD thoroughly in two projects I'm working on recently.
It really improved my and my team's productivity; it's crazy how much time we've saved by (almost) not having to solve merge conflicts, and by dropping the annoying task of manually deploying new releases.
But do you do continuous integration or branching?
@@arminvogt8690 No branching
@SuperWhisk Pushing broken/crappy code is the exact reason why I cannot do this on one project, so I'm also interested in how to handle this.
@SuperWhisk Ah yes, we don't feature branch. But we do have a development branch where we commit our changes. When an issue is completed we open a pull request towards the main branch, and then the CI will kick in.
@@damienk777 So, could that be called CI but not CD?
I love the channel and the content and I am extremely grateful for the knowledge. Thank you! I also appreciate your awesome T-shirts :)! The one in this video is wicked cool. I'd be interested to know where you source them from - I spent an entire evening searching for the exact "Pinky and The Brain" t-shirt you had some videos ago :), couldn't find it (I got a different one though).
Regarding the waterfall: my experience (over many years) is that in former times people (on the customer and vendor side) were involved more exclusively and took more time for planning. With increasing pressure on time-to-market and reduced staff (or failing to increase the number of team members), planning quality got worse and worse. In my earlier projects all team members had more time to think about the optimum solution and possible side effects. Also, employees tended to stay longer, knowing all the processes and products in use better. But at the same time, ecosystems have grown.
So today, less time and less knowledge (in the sense of increased environment complexity) make it impossible to do good planning, which results in a much more "trial-and-error" way of working in general.
Regarding waterfall, I've been privileged to work on an ongoing, un-ending project that has to practise waterfall for various reasons. It works tremendously well and is truly the best way for that ongoing development and (believe it or not) continuous delivery releases of the project. I will say the reason it works so well is that each of the stages between the layers allows for upward feed on analysis: if, when a given work piece is fully technically analysed, the findings are that the work piece has substantially changed, the engineering team can feed that back up the chain with confidence that they will be listened to, and they have the ability to call a stop to a work piece if needed.
Now, I say this about this one project, and in full disclosure the team on that project, from the top all the way down, is relatively small and tightly knit; there is no real friction between the levels and everyone has an equal voice. This is not the case in many other waterfall projects I've been in, and I DO recognise it is unusual, but it's only fair to say :)
...I still miss working on that project :D
If you have a constant upward feed, it's not waterfall by definition.
Excellent talk, thank you. I watched your pair programming video a month or so ago and talked my team leader into trying it out at work, as we have various members with different skill sets. I'd just like to say that we have now adopted this method after a trial period, as it really worked for us. I can't recommend it enough now. Again, thanks!
Thanks, pleased to hear it.
An abuela (grandma) kind of saying in Latin America goes "Despacio porque precisa", and it means, roughly and verbal forms aside, "Go slowly, because it is urgent". It's often said when someone wants to forsake quality and attention to detail for quickness' sake. Usually followed by an "I told you so, now do it again and do it right this time" when we, the then-youngsters, didn't listen and rushed through whatever the task at hand was.
When comparing two or more methods to achieve the same outcome, always be wary of those who always promote one method as better. In 30 years of IT I've never seen a single method that is always, and I mean always, without cons. An objective person with enough knowledge of a method should always be able to list the cons, even though they may rarely apply.
An example is people who always dismiss waterfall or alternatively incremental or iterative approaches.
Well, I disagree with the philosophy; all ideas are not equal. If you deny climate change or conservation of energy, you are wrong. I can certainly point to the "cons" of CD. It is extremely difficult to adopt, because it means that everyone's role changes to some degree, and that is very challenging. But that doesn't make it equally bad as waterfall. I have taken part in many waterfall projects during the course of my career and seen many more. My observation is that when they work, and they sometimes do, they only work when people break the rules. In fact, this is what Royce was saying when he coined the term. The man who invented "waterfall" was advising against it because it is too naive.
My experience of people who defend it, is that they haven't experienced the alternative, because once you have, you would not consider going back to the old, low-quality, ineffective way of doing things.
Dave, I love your videos and the way you always present the many viewpoints on the matter. If I could point out one thing I didn't like much in this episode, it would be the use of too many animations. I found myself distracted by them very often.
Thank you for your wise words.
Some of the subtle ones I did like, it's a fresh new look. But I also did get distracted by the blinking question mark (that also influenced the text layout) and other bigger transitions.
I understand your strong opinions against feature branching, but you should mention that it only works when the implementation can be done within a day (and it's clear what the solution looks like). There are features that can take a week or longer to implement, and not because the feature wasn't broken into smaller parts, but because it's hard to find out how the algorithm should work in the first place. It happens to me quite often that I have to use a trial-and-error approach to find the way out, and sometimes I even have to give up... What are your thoughts about that?
On top of that, he always talks about the trunk as the one truth. That is not always the case. There might be several truths.
As far as approaches go - agile vs waterfall and other processes - I find these things are often secondary factors. Primary factors:
The computer language.
The development tools.
Lack of clear requirements.
Lack of a strong leader
Non-native speakers
Not having a test environment
Having coding standards such that all code released has no "flavor"
Using ideas such as UML, model-based development (Matlab/Simulink) or some other university idea that works like crap.
Not having enough test hardware, or not investing in an emulator/simulator.
Software people
So let's look at this in an example using 2 teams making software to control an engine.
First team uses Matlab and Simulink to generate code and systems engineers are hired over software people.
They have machines that barely meet the minimum requirements specified for the software.
They have a general idea of the thing they need to make.
There is no coding standard because the code is autogenerated.
The simulation is the test, but no thought has been given to integrating smaller models into a larger model.
The team is all remote programmers some speak German, some French and some Chinese. All speak Mock-English.
Second team uses C
A specialized embedded compiler with limited debugging capabilities.
They decide to write code in Visual Studio and create a test harness that simulates the hardware.
The requirements are well written.
The coding standard uses a formatting tool to ensure all code looks as if one person wrote it.
The team all are native English speakers. Some Americans, Some Brits and some Aussies.
Team 1 will produce far less code no matter what process you use.
I could create several of these examples and the main factor of failure/success is never the process.
I didn't get your example, but I can completely agree with ... "the main factor of failure/success is never the process".
Adding to that, the different theories of process, in all their forms, have always had that one single problem: when theory meets the reality of... well, the imperfection of people (as a broad term). The stricter and more inflexible a process is, the worse it will suffer from the meeting with reality (people); not to mention that as soon as the person who pushed the strict process has moved on (and those kinds of people move on/up fast), that process will likely stop being efficient because of "the imperfection of people".
I thought your video was good, but the example about the pull request approach had me a bit confused, because Linus Torvalds said that it works on a "network of trust", i.e. he only pulls from people who he views as "not morons", and they do the same. But I guess you aren't talking about open source projects, and actually what you said makes a lot of sense; I like the idea of pair programming because it seems like a sensible and mature approach. I liked everything you said then... I was just being a bit slow.
On a side note, one of the most important things I took from the video is that someone's own experience isn't enough to convince anyone else of anything, so you need to use science. But I don't believe that anyone sensible realistically goes science-first and experience-second when forming their opinion; I think it always goes experience first, then using science to convince others. The thing about opinions is that they are not all equal (as you pointed out), and some people become wiser from their experiences than others. People who tend to have good opinions, I think, are always doubting their opinions, trying other things to see what the result is, and are tempted to find the breaking point if applicable (when they don't have to) - but this takes more time, i.e. it takes more time to form a good opinion than not to. When I write code I always get curious about whether this is really the best way to do it, and I sometimes spend a lot of time accomplishing nothing out of curiosity. People who are very fast at coding and who don't care about the truth of the matter to the same extent do sometimes seem very good at getting things done quickly, and I think often, if there is a problem, they maybe just need a strong hand from someone like yourself, who has good opinions, to get the most out of them.
I think another way to think of "opinions" is judgement, i.e. how good is someone's judgement? Solid opinions are formed over a long time and judgements aren't necessarily, but they're still closely related. Some people's judgement seems cr*p (too hair-trigger/not enough taken into account, or even worse: they are weirdly biased, i.e. they're not being very objective); some people have better judgement. I have pretty poor judgement about anything that I don't _really_ know about (so not about a huge amount). When people have good judgement and their opinions become more solid over time, then those people are special IMO, because they now have enough experience to be pretty sure that they are right, and that makes a big difference. I mean, if someone with little experience reads something by someone like that, and then reads something by someone else that is completely contrary, then how are they meant to know with confidence what direction to go in? How can they interject themselves and their chosen direction on others when they have no real weight/confidence behind their own opinions - even if they listened to someone like you and they think they are right? (And they probably shouldn't try to.) People with experience are special when they have good opinions (not someone like me, I don't have much experience). That's from my own observation and it is just my opinion, which may or may not be correct. Opinions always change when something contrary to them happens; even if it's just an exception, it still changes them. Sometimes it can be painful.
I think that's one of the biggest misconceptions in our society, i.e. that someone can just read a book by someone else, who has all the experience and who has well-formed opinions, and be anywhere near as good as them by simply reading it. It might happen now and again, I suppose, but I think, in practice, mostly, there aren't really any shortcuts. I.e. I'm not interested in reading your book, but I would hire you if I were in a position to do so, if it made sense and if you were willing - to lead a project. That's how people who are old hands (i.e. experienced), such as yourself, tend to make me feel: I know I can't become you by reading your book. I can just listen to what you have to say and have an opinion, but really I know I need to figure things out for myself. Otherwise I'll never be sure about anything, i.e. if you say continuous delivery is good, I'll give you the benefit of the doubt, but truthfully - without having the experience myself - I don't know sh*t.
I also think that because of that fallacy : sometimes people who are experienced and who are good at what they do (from experience) aren't always as valued in a company as they should be - they actually aren't necessarily all that easy to replace.
Edit: there's a guy called "Uncle Bob" who goes around lecturing people about programming, and he talks about how young programmers these days aren't mature enough, how they don't do things the right way, but I know they just need experience. They need to walk the path of fire that everyone like yourself and Uncle Bob has walked. _That's_ what they _actually_ need... and it does involve some pain, because it's a painful path. It always is IMO.
16:00 One use case where I find pull requests quite effective is working on a feature branch in a remote team. I like to push my code to a feature branch before it is complete and open a "not ready yet" pull request as a way to 1) easily visualise my changes to the existing codebase, and 2) easily reason about my code with a colleague who, for example, is not physically there (e.g. on a different continent or in a different timezone).
Reasoning "with" the code is in my experience useful, sometimes.
I have to add I do like the "branches that don't live for more than a day" practice.😊
On "you cannot do serious work without feature branching":
I worked on a government census project in Germany for the first 5 years of my working life. We were using Subversion (SVN) at the time, and branching in SVN was a huge pain, borderline not working. So there were basically no feature branches (maybe two or three, off the top of my head...). It worked and we delivered. So are you saying this just was not "serious work"?
15:40 Sure, most people would prefer to work with skilled people. In reality many teams have members who are at very early stages of learning their trade and/or the tools they're using. Pair programming could solve the problem in a team where the majority of developers are skilled. The question is how to work with teams where the majority of developers are still unskilled and cannot yet deliver code that passes even a modest quality gate.
I often find myself in the same environment. I wish Dave were able to address this in one of the videos! Hope this comment gets more attention!
I've sent these and other videos to those unskilled developers you mention and it changed nothing in their approach - I find it very difficult to communicate with such people.
Good analogy, the world is locally flat but the further you go along the stupider that initial guess becomes.
The problem with some of this is that you can say "I did X and achieved Y", or even find a positive correlation between a practice and an outcome, but it doesn't say much about whether X causes Y. To do this is little more than cargo cult thinking, and some companies do this, setting up practices which appear to follow the form of Agile while fundamentally missing the philosophy. What philosophy? If you read The Selfish Gene by Richard Dawkins, you can see how evolution works through small changes, selection and iteration. All the unit tests and continuous deployment in the world won't help if users are not able to provide effective feedback and sprints are preplanned months in advance. I'm seeing "agile" become empty; just a new wrapper around the old heavy up-front planning model.
At a job I worked at many years ago, they decided to become "more agile" by amongst other things, ditching trunk development and switching to feature branches that were merged once a month by someone outside the development team.... note my ironic use of "more agile".
To play the Devil's Advocate for your point at 8:34 , you haven't necessarily established a causal link.
It may be that the devs who can keep their branches short and well integrated are simply _better in the first place_ , and so would be producing "better software faster" regardless of the system they used.
I've seen causation on multiple teams as we moved them to a CI workflow. The developers learn to decompose work better and deliver higher quality.
@@bryanfinster7978 That's great to hear! You gotta control your variables to establish causation.
@@HansLemurson You're welcome to replicate our results. I've spent several years working with many teams in a very large enterprise, seeing the same measurable outcomes.
I am a tech lead in a mid-scale game development studio, and I worked in AAA as well. While these tools help, and the big studios especially do use them, if you follow recent news about AAA game releases you'll notice a trend over the last 5 years: catastrophically broken releases, to the point of legally mandated refunds. There are many reasons of course, like bad management and unrealistic deadlines; however, one of the factors in my view is that CI and TDD apply very well to problems with clearly defined (or definable, after some iteration) success conditions. This is great for technology: the engine, library integrations, OS support, etc. However, it is not well suited to behaviors that are clearly wrong but not in a way the software itself can figure out (there is some experimentation on machine learning for this, but it's still a bit green and unaffordable for the medium or smaller studios). Another component is the myth of the theoretical designer, similar to the idea guy (this is being pushed against, but for now it's still a big problem): the perception that the designer defines how the game should behave and iterates on those ideas, and it's up to programmers to make that behavior take place. The programming community is steering towards content-driven behaviors, giving tools to designers and content creators to generate the behavior instead of coding it directly. This is far more testable, but the role of content creation, and the responsibility of designers to dirty their hands with it, are still not fully recognized by the design and production communities. Until this is fixed, we are in a situation where the only pass condition on a lot of the game code is the opinion of the designers. I wish we could automate that :P
Another huge factor is programmer training but that is beyond the scope of this discussion, though pair programming was mentioned.
I think that is a misunderstanding of SW dev in other fields. The destination is no more known than for a game. I do think that games bring some extra challenges, but they are different, not more extreme than in other fields. Writing software for medical devices, cars, or spacecraft brings different challenges too, some of them at least equally difficult. I think this is a cultural thing in the games industry (and lots of others): we all tend to think that our version of SW dev is a special case, and it probably is, but I don't think it rules out these practices; they work in all of these fields. TDD in particular is even more important when the destination is unclear, because it gives you the chance to design SW that can change when you change your mind.
@@ContinuousDelivery I did not mean to imply these tools do not apply; they do, and they help a lot, but they often get blamed for not fixing the aspects of the general problem that they are not meant to fix at all, when the symptoms do manifest in the software, which is supposed to be their domain. I have written (or worked in teams that did) extensive TDD for collision testing and data serialization, but writing a test to validate subjective game design choices is far beyond my capabilities. It's not when the objective is unclear that I think TDD helps less; it's when the state of failure is subjective, regardless of whether it will be your definitive objective or change soon enough.
For these scenarios I believe a better solution is data visualization tools, for designers and testers to better understand the meaning and behavior of values.
Another big problem is testing external libraries, like a physics engine. You use the library to avoid having to solve that particular problem, therefore you don't invest in becoming familiar with its inner workings, and therefore you are ill-qualified to write tests for it, even for your use of it. It's not impossible, and in the long term it should be done, but it's an extra layer of difficulty.
9:04 I'm the only physical person working on the files, and different branches are different parts of the program that may be incomplete. So merging them each day (I don't even make progress each day) would jumble together things that aren't even connected and are independent of each other. (ADHD, I bounce from part to part.) In this case, wouldn't it be better to wait to merge, so you don't end up with one master that has a ton of incomplete projects with unusable code muddling everything up?
Though, tbh, I'm still in the setup phase, so everything is being pushed directly to master.
15:06 Yep, this is how my "team" works. There is no trust between the member (singular) of the team.
My principal issue with CD is the 'organic' growth of the code; committing quickly means committing partial work and patching around it to make it work.
In the end the systems look like those old one-room houses that get extended by adding room after room - you see some in rural areas - until someone decides to just raze it and build a mansion...
Waterfall development was required by ISO software certification ;) Not sure if waterfall is still in ISO, but ISO is usually required in medical software development.
The thing with waterfall, and its advantage over agile, is that it is much more controllable and has distinguishable, planned steps. I'm not saying that waterfall is better, just that it gives more control and is better planned (and agile really sucks at planning ;) ).
In some cases, where planning and predictability are crucial, waterfall is the way to go - and I think waterfall can be done reliably and well, but it requires very good developers on the team who really know what they are doing, while agile doesn't have such high requirements.
When I am doing some project by myself, with tools that I know perfectly, doing things that I know perfectly well how to do - I don't play at agile. I go straight into waterfall, and I get the proper documentation and a proper plan; everything goes well from start to finish and everything is within exact time estimates. In such cases agile would be just a waste of time and quality.
In most regulatory frameworks I have seen, waterfall is kind of assumed, but not mandated. I can't remember specifics of ISO, but usually you can deliver what is being asked for in a better way.
Many of the situations here describe my own experience, but with people who refuse to change it is very difficult to get the much-needed change.
Reason number 8: Failure to protect the customer from themselves.
A personal mantra I developed over my career: As a Software Engineer, your first priority should be to defend the system against the good intentions of the customer.
Customer, in this case, being whoever is asking for the development to be done.
They are the first to fall for buzzwords, the first to demand illogical and useless features, the first to ignore business processes and ask for alterations that would kill their own business.
Protect your customer from themselves. It saves them time, money and helps their business stay alive. By extension, it keeps you employed.
How would you go about enforcing design patterns the team has agreed upon without a code review process?
I.e. I am an SDET. One pattern we follow in our end-to-end test suites is the Feature -> Test Step -> Page Object pattern. It would be easy not to follow that established pattern, which can lead to a confusing code base.
EDIT:
I see now that you would probably recommend pair programming. Unfortunately, in my current job this would mean pairing up with people in timezones 12 hours away; not something that can be easily done. The kind of synchronicity required for pair programming is not something I think we'd be able to implement, unless I'm missing something obvious.
In the past I have worked on teams that wrote static analysis tests to assert this kind of thing. One of my colleagues wrote Freud (analysis for code) to help create tests like this: github.com/LMAX-Exchange/freud
@@ContinuousDelivery thank you very much for this resource! It never occurred to me that static analysis would help with these kinds of issues.
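For anyone wondering what such a test can look like without adopting a whole framework, here is a hand-rolled sketch in Python. This is not Freud's API, and the directory layout and rule are hypothetical: just a plain unit test that scans the source tree and fails when the agreed pattern is broken.

import pathlib
import re

# Hypothetical convention: test-step modules must not import page objects
# directly; they have to go through the step layer.
FORBIDDEN_IMPORT = re.compile(r"^\s*from\s+pages\s+import", re.MULTILINE)

def test_steps_do_not_import_page_objects_directly():
    offenders = [
        str(path)
        for path in pathlib.Path("tests/steps").rglob("*.py")
        if FORBIDDEN_IMPORT.search(path.read_text(encoding="utf-8"))
    ]
    assert not offenders, f"Steps importing pages directly: {offenders}"

Because it runs as an ordinary test, the convention is checked on every CI run instead of relying on a human reviewer to spot it.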
What happens when the feature you're working on takes longer than a day to implement?
Waterfall projects have very low efficiency, but if it's really required, waterfall can be done right. You have to expect to spend the largest part of the time in the early stages to actually get them right - probably often thousands, or even tens of thousands, of percent more than what many people imagine.
The obvious downside is of course that it is hard to measure how well a project is going until things start to fall in place and no one really wants to work several years on a specification that fails real world tests immediately.
Avoiding waterfall in any situation where it can be avoided should definitely be the first choice.
Hard to pair program though when developers in a team are all in different timezones and don't share work hours. Not sure if there's a better solution in this kind of team than PRs
yes, I'm really struggling with this issue too. It is very frustrating.
I think in this case PRs are good. When working remotely there should always be some overlap so you can talk to your teammates. Having worked in such an environment, we put much more focus on design. One person always made a proposal and then discussed it with the other one. This allowed for short effective sessions. And when it came to writing the code everyone involved knew what to expect which made reviews much smoother.
Goes from a very evidence-based critique of practices to an opinion-based critique of code mainlining without a pause. Google does have code reviews, which are indistinguishable from PRs, yet he holds up Google as an example of a large codebase that does not use them (I guess to fit his narrative).
That is a conscious choice asynchronous "teams" and companies make: you cannot work together properly, and will have to optimize for parallel work and more upfront planning and coordination, as opposed to less planning and coordination required for synchronous teams.
@@ottorask7676 Seems it's more often an unconscious choice. It's generally treated as having no impact when, in fact, it has a very significant impact. And so teams are built remotely, and the recognition that this will mean more effort put into planning, more effort put into communication, etc., is lacking.
Yep, the more changes you make over a longer time, the easier it is to integrate them back together. That's how we know that if we introduce a koala to a polar bear they will breed to make the perfect vegetarian predator. Works every time. Well, except for the times we've tried it, but we can all believe what we want!
I was _wondering_ where Drop Bears came from...
I do have one question about Code Ownership: how do you avoid it, and more specifically, how do you avoid it in small teams delivering reasonably-sized projects? No one can be an expert in everything. An Android front-end developer might tweak a thing or two in the back-end service, but they are very unlikely to make meaningful changes, and the back-end developer might be able to tweak some things in the Android front-end app, but will be very unlikely to deliver a full feature. So unless you define "a project" as "the backend service" and "the Android App" (so all team members at least share the same problem area), you do end up in a situation where people rarely touch each other's code.
The real problem is this: doing TDD, CI and Clean Code is HARD at the beginning, and to be effective you have to practice a lot! So a lot of programmers just say "Oh! This does not work in real life! Stop it!". But it's just laziness. I started doing TDD and I'm going really fast. I don't spend much time on manual tests because all my tests are solid, so I can just run the tests without running the entire program!
But it’s really hard to try to explain how I go fast when I do TDD. Anyway, nice video! Thanks!
Yes, I think that is true. I wish that when children were taught their first lines of code, it was in the context of TDD - we'd grow much better programmers if that was what everyone believed "coding" to mean. When you learn Maths, you have to "show your working"; when you learn to code, "show your working with tests".
Something I heard at work: 'I don't need tests, I have confidence in my work.'
Are they any good at making coffee?
@@ContinuousDelivery Good at making Orange Juice 😆
Thanks for this compilation of great arguments on why quality is always the win! Love the channel and definitely gonna get the mentioned book.
I am sure you have some valid points, but people with strong opinions refuse to learn new things!
Somewhat disturbing. I need to go and think. Respect for not presenting the simple answer but rather a framework for reasoning.
Good stuff, and I love that T-shirt. You truly are a legend.
How do you do effective pair programming remotely? Sometimes asynchronicity is nice. Another problem is that you cannot impose pair programming if it is not in the culture of the company, or at least of your team, so you might not have a choice. I wonder if there is anything that can be done in those situations to improve things.
What if you are programming alone on a personal project? I guess that is raw TDD by yourself right?
What happens with big changes that are not easy to split? Those changes that break a lot of things at once, like a language or framework version upgrade on a large codebase in which the code is only partially covered by tests - especially when that language is an interpreted one, so there is no compiler to help you discover issues in code that is not covered by tests.
I'd be interested in thoughts about software systems that have hardware in the loop, or hardware at the end node if you will - for example IoT devices, vehicles... autonomous vehicles... How do you wrap the last layer in an automated test?
Modular architecture, use "ports and adapters" at the edges so that you can do the vast majority of testing in simulation. This allows the HW to change and not break the SW (only the adapter) and the SW to be developed before the HW exists.
This approach is difficult in hardware only to the degree that you allow hardware induced concurrency to "leak" into the software, so design the SW to manage the concurrency. Designing the SW to be async, helps a lot with this strategy.
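A minimal sketch of that shape in Python (MotorPort/FakeMotor are illustrative names, not from the video): the core logic depends only on the port, so tests run against an in-memory fake, and the adapter wrapping the real ECU/driver is swapped in at the edge.

from typing import Protocol

class MotorPort(Protocol):
    # The "port": the only thing the core logic knows about the hardware.
    def set_speed(self, rpm: int) -> None: ...

class FakeMotor:
    # In-memory test double standing in for the real hardware adapter.
    def __init__(self) -> None:
        self.commands: list[int] = []

    def set_speed(self, rpm: int) -> None:
        self.commands.append(rpm)

def ramp_up(motor: MotorPort, target_rpm: int, step: int = 100) -> None:
    # Core logic under test; it never touches the hardware directly.
    for rpm in range(step, target_rpm + 1, step):
        motor.set_speed(rpm)

def test_ramp_up_reaches_target_in_steps():
    motor = FakeMotor()
    ramp_up(motor, target_rpm=300)
    assert motor.commands == [100, 200, 300]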
The new Muad'dib of programming! (t-shirt ref :p). Very useful content!
🤣🤣 SHHHHAAAAAAIIIIII HULUD!
I have personally found that Unit Testing has been incredibly useful NOW at catching all the "null/starting" state things a human mind can miss - too often I find I develop code as though the system is "mid-state" (is it just me? Maybe!). I have found it really helps focus the SOLID principles (especially the D and S). And finally, most (useful) software is sufficiently complex that we cannot predict all the interplay, and UT is a solid first step to help manage that.
It is an interesting approach; would you have a hypothetical example of developing as if the system is "mid-state"?
Do you mean like if you could code unit tests to be run in a live system? I can see it for a stateful Class instance, but not for a pure function, so I suppose it is rather for OOP?
@@rafeu2288 All I mean is that I often forget to properly account for an "empty" system - probably because most of the code I've ever written has gone into existing systems. If it weren't for unit tests, my code would break the system when "starting with no data"!
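As a tiny illustration of the kind of test that catches this (the summary function is invented for the example, in Python):

def summarize_orders(orders: list[dict]) -> dict:
    # The easy thing to forget: a brand-new system has no data at all.
    if not orders:
        return {"count": 0, "total": 0.0, "average": None}
    total = sum(o["amount"] for o in orders)
    return {"count": len(orders), "total": total, "average": total / len(orders)}

def test_summary_of_a_brand_new_empty_system():
    # Without the empty-case guard, the average would divide by zero on first run.
    assert summarize_orders([]) == {"count": 0, "total": 0.0, "average": None}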
The testing ideal you've described sounds great, but my reality is different. We'll make a change, deploy to test, then perform some ad-hoc testing that the change behaves as expected. Then we'll run the automated tests, which have years of investment, of which a bunch will fail. We'll dutifully work through these, but 99% of the time we'll end up changing the test, not the change we're introducing. In my mind, when the tests give these false-positive results they have failed to provide value, and are instead a liability in terms of the effort to 'fix' them. No one around me, though, seems to share this view. Is it just me? Am I going crazy? There has to be a better way.
What you describe is a common symptom of bad tests. One of the causes of this problem is writing the tests after the code is finished rather than writing the tests before. Writing tests afterwards means that you end up testing that the code you wrote is the code you wrote. The tests are tightly-coupled to the code, or system, that you are testing. Writing the tests first tends to make you focus more on the outcome you are trying to achieve. This is a good thing, because desirable outcomes are much more durable than solutions. This means that the tests are less likely to be "wrong" and so are better at telling you the boundaries within which you can safely change the code. If you have tests like these, and you change the tests, you are changing what the SW is supposed to do.
I'd start by trying to introduce some tests that look more like these "outcome" focused tests for new work, and maybe that cover key behaviours of your system. Then decide whether it is worth re-working what you have now, or dumping the existing tests and replacing them with better, outcome (behaviour) focused ones.
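As a rough illustration of the difference, with made-up pricing code in Python (none of this is from the video):

class PriceCalculator:
    def __init__(self):
        self._loyalty_discount = 0.10  # internal implementation detail

    def price(self, amount: float, loyal: bool) -> float:
        return amount * (1 - self._loyalty_discount) if loyal else amount

# Implementation-coupled (typical of tests written after the code): it pins
# an internal field, so an honest refactoring fails it for the wrong reason.
def test_discount_field_is_ten_percent():
    assert PriceCalculator()._loyalty_discount == 0.10

# Outcome-focused (the test-first style): it states the desired behaviour
# and survives any refactoring that preserves that behaviour.
def test_loyal_customers_pay_ninety_percent():
    assert PriceCalculator().price(100.0, loyal=True) == 90.0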
Dave, how would you respond to this slightly changed statement from the presentation:
"These ideas might work one simple web apps (or the greenfield) but not on my huge legacy system that is 20y old, a mess and needs to still be changed because a bank/government branch/insurance etc depend on it."
I have seen such code bases. Lots of devs (30-40) committing into a codebase that is basically a "ball of mud", where everything "could" destroy anything, and where you have to test the whole system at once because of that (no separate modules), and where testing therefore takes hours and red tests are hard to trace back to a single commit.
How do you tackle a situation like this? What is the strategy out of those vicious cycles and into a mode where you can actually implement all those strategies you describe? (Which I would have preferred when I was there.)
Most (not all) big orgs that practice CD now started from where you describe. It is rarer to begin with a green field. Size of codebase is not an issue; you can do this with very big repos and codebases. The problem is the culture, the poor testability of the system, and sometimes (reasonably often) very inefficient deployment - if it takes 2 hrs to deploy your code, you won't be able to build, test and deploy it in an hour! So you work to optimise these things, usually starting with eliminating manual regression testing, which quickly leads you to automating config management and deployment.
@@ContinuousDelivery Thanks for the answer. That specific system already has a comprehensive automated test suite. The integration tests need >=1.5h. The e2e tests with the UI need the whole night and a fleet of computers.
Problem is the "big ball of mud" : the business processes run throw nearly all parts of the system and the shared data model increases the side effects even more.
Thus, you never know if you destroyed accidentally a remote part of the application. And testing after each tiny step is prohibited by cycle time( and resources needed.)
Defining and enforcing modules/components in such a system would be my first guess at how to move forward, but such refactorings are costly and compete with the business's cry for features (also, some business features are legal requirements).
I would be interested in tips about how to untangle such a mess.
Hey Dave, great talk! Thank you for sharing your wisdom. Quick question: how would you solve the pair programming problem on a team whose members are spread across multiple locations? My team has contractors located in several different US states, and most of my teammates are deathly afraid of abandoning our PR process for code review.
Remote pairing works very well, all you need is the ability to share a screen and a shared repo so you can hand-over control on commit. Convincing people is the difficulty.
The only times I write sub-optimal programs is when I am forced to work in sub-optimal ways (DevOps, Agile, Scrum, to name a few). I get the best results if I can make my own decisions (languages, libraries, frameworks, methodology), but unfortunately that's not always possible.
You say that you're against feature branching, but then you say it's fine as long as you're merging into master at least once per day.
I'm guessing this means you prefer trunk-based development, but you think it works fine if someone goes with a workflow of feature branching while doing a PR to merge into master at least once a day? I could see myself being able to convince my team to do feature branching with daily PRs, but I can't see them doing trunk-based development, so I'm wondering if you see any issues with that approach.
I also think it would be nice to maybe explain how to handle schema changes with CI in a future video (I had a look, but I couldn't see any related to this currently).
Handling code changes with feature switches seems reasonably easy to do, but I can't quite understand how DB schema changes should be handled with CI.
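One commonly described way to square schema changes with CI - not an answer from the video, just a sketch of the usual "expand/contract" pattern - is to split every change into small, backward-compatible migrations that each release on their own. In Python with the standard library's sqlite3 (table and column names invented):

import sqlite3

def migrate_step_1_expand(db: sqlite3.Connection) -> None:
    # Expand: add the new column alongside the old ones; old code keeps working.
    db.execute("ALTER TABLE users ADD COLUMN full_name TEXT")

def migrate_step_2_backfill(db: sqlite3.Connection) -> None:
    # Backfill: copy data across while old and new columns coexist.
    db.execute("UPDATE users SET full_name = first_name || ' ' || last_name")

# Step 3 (contract) ships only after every deployed version reads full_name:
# drop first_name/last_name in a later, equally small release.

if __name__ == "__main__":
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE users (first_name TEXT, last_name TEXT)")
    db.execute("INSERT INTO users VALUES ('Ada', 'Lovelace')")
    migrate_step_1_expand(db)
    migrate_step_2_backfill(db)
    print(db.execute("SELECT full_name FROM users").fetchone())  # ('Ada Lovelace',)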
T-shirt game on point!
Yea, the talks are great - but have you guys noticed Mr. Farley also has the greatest t-shirts ever?
I have listened to several of your talks on this channel... but this one is remarkable because it provides me with all the arguments to convince my own team to change the way we develop our software. Automated testing and refactoring are difficult to introduce to a team when the software has already been written without these practices in mind. It's kind of difficult to add tests to existing software, because it hasn't been designed for testability in the first place.
Anyway... thanks for this talk !! It will help me.
I love watching your videos, not least because of the awesome shirt collection you have! ;)
"At the moment when you write the code, also writing tests is clearly more typing."
I don't know why but that line cracked me up. XD
Onboarding and knowledge gaps in self taught drop band on computer science degree. Standardized interview process, very clear road map. No discrimination for fluent English.
On the topic of CI vs feature branching: after some experience, I am more inclined to agree with CI in most cases.
Are there any uses for branches as a tool of git that we could use in our workflow?
One example I can think of is when we need to update one module, and we are not sure if we want to keep the changes or not. And duplicating the module and changing the duplicate wouldn't work, because the module is used by other modules.
This seems like a good approach and I really want to try implementing it but I have a question.
Let's say 100 people are implementing features and 10 people don't finish. Their unfinished commits are still on master but the team has to push a new release. What happens then?
Unfinished code (meaning new features or a replacement feature) should be hidden behind some kind of config. Small refactorings and minor changes don't need flags, but should be covered by automated tests. Early in development we hide it behind a compile/packaging-time flag within our build scripts. Later on we convert it into a deployment-time flag (or maybe even some kind of dynamic, plugin-like config) so we can have different behaviors in different environments and test with/without the new features. When the feature is in "open beta" we change the flag to a runtime/properties/preference/opt-in/something config. Whether this is only done in the UI or something deeper down in the system depends on the kind of change.
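For instance, the later deployment/runtime stage can be as small as this sketch in Python (the FEATURE_NEW_CHECKOUT name and the helper are made up):

import os

def feature_enabled(name: str, default: bool = False) -> bool:
    # Read a feature toggle from the deployment environment.
    value = os.environ.get(f"FEATURE_{name}", str(default))
    return value.strip().lower() in ("1", "true", "yes", "on")

if feature_enabled("NEW_CHECKOUT"):
    ...  # route into the unfinished-but-dark code path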
Now, I really dislike the question "Let's say 100 people are implementing features and 10 people don't finish" because it implies everyone works alone and one single person having a bad week can make the entire thing fall down. Talk about bad leadership and development practices.
Either the entire team fails or succeeds. If 90 people are "done" and 10 aren't, what the hell did those other 90 people do? Sat around fiddling because they "did their part" and already got the gold star from the boss? They didn't help? **It's not the fault of the 10 people who got stuck that some plan went bad; it's the other 90 people + the leadership that need to get their heads straight.**
I’ve done large, multi-week refactoring of a complex rules engine with daily release of my changes using branch by abstraction.
@@ddanielsandberg I’m on a three person team. We failed the sprint goal because I couldn’t deliver what I committed to. Could be my fault for not speaking up early when I felt like I was stuck. Man did I feel bad.
I can almost agree with all of this, especially about doing data-based reasoning. And when he mentioned it's possible to do Code Reviews without PRs (or, what he probably means more generally, feature branches), I got interested. But then he sets up the straw man that code reviews are about mistrust in the team and starts attacking it on that basis. Then he concludes that you don't need code reviews because they can be replaced by pair programming. The reasoning? Because the data shows pair programming leads to better software - but that wasn't questioned in the first place. The data also shows that code reviews lead to better software, and certainly they do that in combination with pair programming, so why should either one be expendable? He doesn't give any data on this, though.
So in the end I still don't know how to do Code Reviews without PRs.
How do you do CI on an open source project where many contributions comes from untrusted third-party developers?
It was funny to watch a team (which uses CI) plan a rollback of changes after discovering the stuff couldn't be finished in time, instead of just not merging a feature branch.
Came for the knowledge, stayed for the shirt.
Thanks for your awesome content 👏
I just had a look at State of DevOps report for 2021 and in the pdf they're talking a lot about the "Platform model", is it something that would be a good topic for another video?
Man, I wish I worked at any company that works this way. Here in the real world, we have massive, 25-year-old code piles that were developed in an ad hoc way with whatever the technology du jour was at the time they were written, in several languages at the same time that are incompatible with each other, and given zero time to fix technical debt or even get a consensus on what we're building. Even the actual build system is so complex and brittle that any change means potentially days of work.
But sure, *I'm* a bad programmer.
A couple of comments...
First, thanks Dave for these videos. I first learned about and used XP 21 years ago. I had just left a massive waterfall project that integrated code every... well, once about a week before a release. Needless to say, CI made perfect sense to me. 😂 The other XP practices all resonated as well, although TDD did bend my mind for a couple of weeks while I learned how to do it.
Second, you hit the nail on the head about the Pull Request process being created for distributed open source projects. What gets me, though, is that no one ever seems to see the latency in that process as comments go back and forth. IME, developers will submit a PR and then start something else, only to be forced to context-switch when the first comments arrive. There's also an aspect of the sunk cost fallacy at times, because a developer has potentially poured so much work into the code before the PR is ever submitted that they will naturally be more defensive about criticism. Hell, I felt it myself the first time I worked in an environment like that!
All in all, a great video and I agree wholeheartedly with all of your points!
Thanks, XP got a lot of stuff right!
Sorry to be the cynic, but none of this matters in 99% of work places. All of these things get lip service, you join the team, then you look at the source base and nary a test is to be seen. The CI/CD servers have cobwebs and their reports are garbage. "We don't have time to stop for gas," is the motto.
This led me to a tangential thought. I think a lot of management know that Agile is fab and waterfall is bad, but only on the basis that they understand waterfall; understanding of Agile is often as deep as the name itself, and it's a great name for a development process. If it was called "Incremental", let's say, I'm sure some management would not subscribe to it. The crunch comes when projects are implemented in an Agile way but the expectation remains 100% completion of the original concept (plus months of additions and tweaks), as if it's waterfall++.
Absolutely. Certainly the commonest form of "Agile" that I see in practice.
Thank you for your video, it was very helpful. I've been watching this channel and reading about CD for about a year. This is the first time I heard you mention working with PRs in open source development. This is exactly my case. I hope you have more suggestions, because those you mentioned are infeasible for me. First of all, in my case, no one is trusted to push commits unilaterally. This includes myself, who made this decision. Pair programming with one senior developer present wouldn't work for logistical reasons (e.g. timezones and availability). Historically, some problems were only spotted when more than one person was doing the review. And there are some requirements I don't know how to automate, for example the logical grouping of changes into commits. The way I try to address these problems, while using PRs, is to provide training to more junior developers, do the code reviews as a group, and have a script for automatic rebasing of PRs (only fast-forward merges of PRs are allowed).
Is there an alternative way to do trunk-based development without pair-programming? It seems trunk-based requires pair-programming when you have juniors in your team.
Yes, you can organise reviews differently and still do TBD, but it is not as good as pair programming. How are you training your juniors? Pair programming is by far the best way to do this; it will get them up to speed MUCH faster. If you really can't do it, then just have someone monitor the juniors' commits and review at that point, reverting if necessary (and probably then pairing!)
I have the feeling you are a fan of David Deutsch. Might it be because he also uses "constructors" in his theories? :D
😁😎 I think “the beginning of infinity” is probably the most mind-expanding book I have read.
2:58 Now I understand why waterfall is not very popular, and even why a waterfall-methodology project would have troubles. That said, there are some projects that could only use a waterfall model, especially when making a safety-critical system that is tightly coupled to hardware. Hardware development can be slow and costly, depending on what we are trying to create. If we are, for example, trying to create a software system to control a car's engine, brakes and other vital, safety-critical components, we do have specifications at the beginning of the project, and these specifications very rarely change during the project. As such, agile methodology here does not make sense. You could in theory deliver parts of the functionality to the customer (= car manufacturer), but this is only useful for the purpose of testing/validation and not actually usable in a real product. Imagine having a car with firmware able to control the brakes but missing engine control. So yes, there are cases where the waterfall model is preferred. After all, it would not exist if it were completely useless. However, I'm open to being proven wrong.
Even for those kinds of systems waterfall is now pretty much discredited, it is not how the two biggest car manufacturers in the world make cars for example, including all of the software that controls them. It is counter-intuitive, but the data says that it is even more important to work in small steps for safety-critical systems than for others.
@@ContinuousDelivery Hmmm, maybe this topic is worth a video that goes deeper? Because I think I see the point of delivering parts of the work on a constant basis in order to keep momentum and quality... but I do not see the point of delivering a half-finished product to the customer?
I get frustrated when most programmers focus too much on technical details. Sure, I love the tech and how to express stuff in code, but the most important stuff is the problem that we are going to solve, and the behavior needed. Software developers tend to create a lot of technical complexity and, subsequently, technical debt. Hence we need structure, like that which DDD gives us. And yet, I have encountered some resistance to the idea: "That doesn't work. You cannot map reality directly to code. We have to create all these classes." You can, for the most part. It just requires the skills and the right perspective.