If someone tells you that there's only one good or best way to build software, regardless of project scope, project type, language used, and team makeup, be afraid! No single process is flexible enough to meet the demands of every possible implementation. It's almost like a certain channel owner is trying to sell books or a training course on a competing subject.
CI, CD, and TBD have all been proven to predict (yes, "predict", not "correlate with") higher performance in software organizations, per the DORA and State of DevOps reports. You can learn more in the book Accelerate if this topic interests you; it covers the research methods and more.
Indeed. For example, we release a new version every 6 months (it used to be every year). It's a Windows desktop application; users have to install/upgrade manually by running setup.exe. Just FYI, such things still exist. (It's a financial business application with a codebase that's more than 20 years old.)
@@zzzzz2903 we build financial software. One of our products is a Windows desktop application. The teams that build it use CI/CD. They always know what state the executable is in, though they only release on a predetermined schedule. I don't know why you think there's a conflict there
@@joshbarghest7058 "Continuous deployment is a strategy in software development where code changes to an application are released automatically into the production environment." -- So if you release every 6 months, what do you mean by CD? Also, there is no "production environment"; there's a 600 MB setup.exe. Based on our big customers' update cycles (which are sometimes years!), they pick the latest setup.exe at that point and upgrade to it. Again, what is CD here?
You guys ALWAYS forget non-web applications. For these kinds of applications, "continuous" only means "as frequent as possible". In the embedded world, even the most frequent release cycle can be a month long. (Or there may only be 3-4 releases, ever.) And we are not allowed to release, hm, not-too-stable stuff (I tried to re-phrase the word "crappy"), because waiting for the next release to fix it is not an option; a bug might be dangerous IRL.
@@barneylaurance1865 Not always. For a period I worked on projects where we could hurt or kill our testers if we didn't take proper care. So for safety reasons we had an extra test branch, and before we did manual releases to it we went through not only automated testing but also code reviews (manual and automated). Yet mistakes happened, though no one was hurt for as long as I worked there.
@@karsh001 OK, so you had to delay delivering the code to the testers to do code reviews for safety. I'm still not sure why you have to delay delivering it to your programming colleagues. I guess you work with an emulator or something so you don't injure yourself when you're writing it.
Instead of choosing which git strategy to use, it's better to beef up the testing first... Whatever git strategy you use, it will be useless if you don't have proper and robust automated testing.
@@andrealaforgia5066 I believe the test suite is the premise for the whole CI thing to work in the first place. You could blindly (without any tests) commit to the trunk, but then, when an issue occurs ("is discovered" would be more accurate), there's no way to tell which commit caused it. That takes time to investigate and makes people doubt the CI approach. Sooner or later, they will switch to the feature-branch approach to make sure issues are well managed/isolated, which actually gives a false sense of security. Adopting CI is a matter of choice, but having a robust test suite is a matter of implementation.
Testing with git-flow is much better aligned than a haphazard trunk-based approach. Git-flow naturally allows a dev branch to be properly tested during a sprint BEFORE it merges to master (our single source of truth) and BEFORE it gets released. Git-flow also helps to manage release notes.
Amen to that. I'm currently working in a company which does all testing by hand. You have no idea how many restless nights our tester (yes, one tester) has.
@@mikebell184 You are using the horse-and-buggy argument. "Our horses work just fine. Horses are better than cars because of XYZ." Yes, git-flow WAS amazing. It was great for its time. It's time to move on. No more develop, master, hotfix, whatever... It's time to have 1 source of truth. Whatever processes and steps you would use to test/catch bugs before you merge develop into master, apply those exact same processes and steps to each individual branch before it makes it into trunk, so that trunk at any moment is releasable. There's no ambiguity about whether trunk is ready or not.
@@kishanbsh I don't exactly want to give specifics, but it's pretty highly regulated, meaning that every development needs quite a bit of design and approval from higher ups. We often work on developments that are quite large and can be rejected by senior people at the last minute. Removing integrated code is much harder than just integrating "manually", i.e. git merge, as soon as we get the green light.
Looks a lot like good old waterfall... There are better ways to work, but not every industry adapts at the same pace. I guess you can't release every day or week, but more likely every month or quarter, am I right?
Not every idea is feasible without changing your mindset. Last minute changes? - Bad. Rejected at the last minute because of senior people? Why weren't they there sooner? - Bad
I like branching to isolate changes which "aren't ready" from everyone else's changes. But I also like frequent rebasing, so that everyone else's changes aren't isolated from the branch, i.e. the integration is continuous, but unidirectional. This also encourages you to break changes into the smallest useful unit, as "being done" has a direct incentive: not needing to be the one who deals with that integration. It's very similar to CI, but admits that some changes really do take more than a day, and that merge commits act as a useful label for grouping related changes together.
Always get latest, deal with the fallout in your branch, squash and rebase on top. Nobody needs to see all the crap commits that went into building the delivery. The next argument will be "but I've got loads of individual parts"... Your PO is doing a crap job of managing the project and breaking things up wrong... I suspect you're using Jira, which teaches baaaad habits.
@@marshalsea000 I've got loads of individual parts, and I'll break them into the easiest-to-read commits. Squash the corrections into the original, but don't make me read about a change to the API at exactly the same time as the new method which justifies it. The justification belongs in the same PR as the change, but not in the same commit
@@marshalsea000 I tend to call any defined interface an "API". made-up example: needing to support a new type of authentication token, so commit 1: add a new "token type" parameter / ensure it is accepted; commit 2: add support for a second token type; each commit can be read in isolation and makes sense on its own, but the first commit is only justified by the presence of the second commit, and the second commit requires the first commit as a prerequisite in order to be a non-breaking change.
But how do you assure that the code in your branch works? You are running all the tests (including Integration and End-to-End tests) on your machine each time you merge or rebase your local branch? How are you sure the code will work on some other machine before you merge your branch into main?
14:20 again with the claim that feature branches aren't being tested in integration with other changes? But it's perfectly possible to have the CI server merge changes from master before building - and notify you if changes can't be automatically merged. And yeah, that means you're not testing the integration with work on other feature branches - but as you said, that's the intention, to give other teams the time to refine their work. I'll keep following and listening, Dave - but there are still two unanswered questions for me with regards to trunk-based development. One, how do we avoid wasting everyone's time with half-baked code that needs more than a day to set? And two, how do we do code reviews in practice? These two issues compound, in the form of half-baked, unreviewed code ending up in production daily. While that may be acceptable in some environments, it's another situation for teams working under legal oversight or with life-critical software - are you really certain this is right for everyone? I'm still watching, but still don't feel like the central issues are being addressed. 🙂
Ensemble working and continuous code review are what you're looking for. You cannot inspect quality in, quality has to be built in. As for half-baked code I don't understand what you're talking about, why would anyone commit code that is not complete? You can hide partly developed features and changes behind feature flags for instance, if that's what you mean.
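For instance, here's a minimal sketch of what I mean by a feature flag; all the names are made up for illustration, not from the video or any real project:

```python
# Minimal feature-flag sketch (hypothetical names): the half-finished code
# ships with every commit, but stays invisible until the flag is flipped on.
FEATURE_FLAGS = {
    "new_checkout": False,  # flip to True when the feature is ready to release
}

def is_enabled(flag: str) -> bool:
    return FEATURE_FLAGS.get(flag, False)

def legacy_checkout_flow(cart):
    return {"status": "ok", "items": list(cart)}  # what users actually get today

def new_checkout_flow(cart):
    # still under construction; safe to commit because nobody reaches it
    raise NotImplementedError

def checkout(cart):
    if is_enabled("new_checkout"):
        return new_checkout_flow(cart)   # incomplete work, integrated but hidden
    return legacy_checkout_flow(cart)
```

With the flag off, the new path is dead code as far as users are concerned, but it still compiles on every commit and can be exercised by its own tests.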
@@ottorask7676 "why would anyone commit code that is not complete" - because they don't want it to get lost, for example. "behind feature flags for instance" but feature flags beat the purpose of not having branches which is "finding out that my code is wrong as soon as possible" .
You think that "one day" is literal? fetch and rebase is the answer.
From my current understanding of the topic, the most important part of the trunk-based approach is to have tests. Not hollow unit tests, but a complete test suite composed of different kinds of tests: unit tests, integration tests, functional tests. A test suite where, when you see GREEN, you know this is production ready. Every single breach in production should be treated seriously in order to enhance the test suite. So for any commit, either we get a green on the test suite, or we roll back the commit.

Then we don't really need to do a code review on every commit; this could be a review/improvement process even after the code is committed, not a safeguard checkpoint. As long as we treat every kind of breach seriously, code review shouldn't be an issue.

As for half-baked code: for every new feature, there will be multiple commits until the feature is usable. As long as those commits don't break the current system (they pass the tests), that should be fine. The feature can be hidden until it's complete, but we're still able to test the new feature together with the current system. So at any point while developing the new feature, we know that the partial feature still works well with the current system without enabling it for end users. And you don't have to release every commit to production on a daily basis.

Still, with the CI approach, there might not be a clean cut where we could find a commit with no half-baked feature to release to prod. That's exactly when the test suite gives us the confidence to release to production. If everything works in tandem like this, there shouldn't be any issue applying this approach. So it's crucial to make sure everything works in tandem.
Very strong disagree with not having branches imho. Having to work within the CI workflow is extremely annoying when developing entirely new modules to a repository with few dependencies and that no one actually uses yet. In those cases, you very definitely do want to have a feature branch. For making small changes to an existing module, this is less of an issue.
Problem is, many companies don't use CI/CD. For Pete's sake, many companies don't even test their code before committing ("there's no time to write tests", "it will take too much effort/time/assets etc., maybe in the future", they say). So we have to stick with feature branching, merging regularly and praying that no one breaks the master branch. Sadly.
There's no time to NOT write tests! Failing to write tests is an extremely selfish act, forcing your technical debt onto the shoulders of your successors. Don't let your name become a curse word because you *will* live on in infamy in the commit logs.
This is us, and atm I hold that opinion. We are 3 people handling multiple bigger apps: one standalone and, atm, 3 built out of an in-house framework which share 80-plus packages and are modular. We built it ourselves, and at the planning stage 5 years ago we also decided that we cannot afford it. Would you try to convince me here? I am very curious about that, especially as the newest of us now writes tests for his stuff.
@@Chiramisudo That's an invalid point. Have you ever taken out a loan? A mortgage? Tech debt is a very similar thing: you get what you want now and pay for it later, and pay more. Why would you want to pay more? Because you made a deliberate choice: having the thing now is more important than some extra money in the future. So having tech debt might be a very reasonable thing. But you must control it, same as extreme monthly payments on all your loans will crush your budget.
@@krivdaa9627 A poor analogy. With a mortgage, it is YOU who is responsible to pay the debt and not your successors. The ONLY justification, in my mind, is when the company will literally go bankrupt and cease to exist in its current form because it failed to deliver a product before running out of funds. Maintainable (readable, testable, etc.) code is THAT important.
What about a situation where none of the methods seems to work well: you need to make a fundamental architectural change to your code. Maybe some central module in the code requires a completely different approach. Refactoring would take 10x the time of simply rewriting it. Refactoring can be done in small steps but would be extremely slow in this case. A complete redesign and rewrite would be the much faster way, but you would need to touch lots of areas in the whole codebase to make the change, and you can't commit the changes before every part of the code has been changed to use the new module. Thus it sounds like a "one man job" while others aren't allowed to touch the codebase at all. A tricky situation. Any suggestions for times like that?
Why are you claiming that refactoring would be extremely slow? Being able to make changes in small, completely working steps is ideal. You can quickly integrate each of the changes and move on to the next one with confidence. Is the holdup really the refactoring, or is it a slow release cycle that is throttling your integration to one step every couple of weeks? Doing a complete redesign is almost always actually slower. People who claim it is "faster" to do a high-risk rewrite are usually just counting the time to write the first draft. The cost of a change isn't just the time to draft the new code, but to test it and go through all of the debugging cycles to fix the regression issues.
@@thatoneuser8600 The hypothetical we are working under said the refactoring could be done in pieces. So the commit message should state which piece you actually did and why.
The problem with a massive change in one hit is that it is almost impossible for people to effectively review; the review cycle alone may span weeks, by which time the branch is stale and you probably need to fix conflicts.....and that's when the bugs creep in. Much better to split the change into smaller tasks which the rest of the team can keep up with. It's fair to say that it *is* more work overall, but the end result is more likely to be better quality. I've been there and done it both ways. For me, velocity trumps everything; stale branches are the enemy.
This would create a chaotic trunk history. Whereas you would rebase on a feature branch to simplify history, doing so on the common trunk is nearly impossible with a distributed team. It also makes pushing code to the origin more of a headache. I recently had to work in an environment where the machine hosting the local repository was unreliable, and pushing to the central repository was the only way to back up. Using your approach would have meant that I would need to push incomplete versions of my features to the trunk just for a backup (or create a temporary feature backup branch, which seems antithetical).

Lastly, your whole framework seems to be heavily reliant on the timeline. It might make sense for a one-day feature, but if that feature suddenly grows into a multi-day task, then you have to worry about finishing quickly just so you don't start lagging behind the current truth (whereas with a feature branch I would simply pull master and rebase my feature branch to refresh the lag from the truth I was working with). Eventually you might give in and truly make it a feature branch, then revert your master and pull the latest before rebasing the new feature branch, if you deem it out of scope for a single task. This is an arbitrary, self-imposed limitation that almost acts as a punishment for estimation errors (which are prevalent).

I think simplicity has its appeals, but ultimately trying to conform to some theoretical goals while ignoring the practicality leads to issues like those I mentioned. Git-flow has issues, but dev teams should use it as an inspiration for a workflow that better suits their needs. To me, focusing on improving testing and beefing up CI and deployment robustness is more interesting than striving to adhere to some theoretical metrics.
I think the approach he mentions only works when the team members are evenly competent. Whatever the team size, if you have a couple of intern devs on your team, things could go wrong soon. I worked on a project with around 50 devs across many countries and competency levels (some were from short-term outsourcing companies); if we had used his approach, it would have been a nightmare too.
My 2 cents: frameworks and libraries are mere tools to developers. We use them the way we see fit to get the best out of them in particular use cases. We are their masters, not their slaves.
That's not even the worst-case scenario! The worst is "Dude, please __unmerge__ / counter-commit all your intermediate commits from the common trunk - your feature is getting postponed for several months for some reason." The second shitpile is "how do we review the code?" The feature-branch model has a perfectly good answer to that: reviews are done on pull requests. ...And if (when) you want to integrate your code, just rebase on master and run the tests. When you are ready to be included in a release, rebase the feature on the release and run all required tests. The guy simply makes HIS work easy at the cost of introducing hell-on-wheels on the dev side.
@@krivdaa9627 you are not getting the point. If you develop the same way you are developing today, yes, it is going to be a mess. In continuous deployment you separate deployment from release, e.g. by branch by abstraction or feature toggling. You have to use different techniques, but you gain a lot. Open up your mind and try. I would never want to go back to feature branch hell, stop the world releases, ...
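To make that concrete, here's a rough branch-by-abstraction sketch; the names are hypothetical, just to illustrate the shape of the technique:

```python
# Rough branch-by-abstraction sketch (hypothetical names). Both implementations
# live on trunk at the same time; callers only depend on the abstraction.
from abc import ABC, abstractmethod

class PriceCalculator(ABC):
    @abstractmethod
    def total(self, items: list) -> float: ...

class LegacyPriceCalculator(PriceCalculator):
    def total(self, items: list) -> float:
        return sum(items)  # the behaviour everyone relies on today

class NewPriceCalculator(PriceCalculator):
    def total(self, items: list) -> float:
        # built up commit by commit; switched on only when it's complete
        return round(sum(items), 2)

def make_calculator(use_new: bool = False) -> PriceCalculator:
    # the single switch point; delete LegacyPriceCalculator once migration is done
    return NewPriceCalculator() if use_new else LegacyPriceCalculator()
```

The old implementation stays the default on trunk until the new one is finished; then the switch and the legacy class are deleted, and trunk was releasable at every step.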
What if some of your tests are bad (sh... happens) but a huge logic bug is hidden, and was hidden for two months, and it is so complex that it can't be fixed in a day or even a month in the current version, because all of the latest features were built on the part of the code infested by that bug? Users have noticed the bug and you just have to roll back production to one of the previous releases, where it can be fixed in minutes. How do you deal with this situation without branching? Maybe this CI/CD flow is based on a "happy path" assumption?
Thanks for the video. Quick Q: How do you handle Code Reviews? In my experience with after-the-fact reviews teammates tend to forget or get lazy. With feature branches you can add checks to enforce reviews in pull requests. I'm not talking about catching failing builds, but rather knowledge transfer for new devs or mentoring for juniors. Thank you.
@@ApodyktycznyCzlek Depends. The goal is to remove the need for merge requests, because when you collaborate with half of the team on a given task, merge requests become unnecessary: almost all the things caught in review would be corrected while pair programming. Of course you won't get there overnight, but when you start to notice fewer and fewer comments in code review, you can start shifting to pair programming only, without code review (or with code review on demand).
Gitflow, my old nemesis. I think I have had more discussions, in my past jobs, about interpreting how Gitflow should operate than about the Gitflow projects themselves. It's really just a waterfall-based project management tool (in my mind), which makes it a bad tool for CI/CD anyway. Good discussion! Cheers,
Oh boy, you really didn't understand how it works, did you? Otherwise you would know you're talking bullshit. GitFlow is great, and it is agile. Feature flags are cancer, and trunk development is something we were doing 25 years ago. And Subversion was perfectly fine with that. Do not speak about stuff you don't know.
I used Martin Fowler's excellent guide to branching patterns some 20 years ago to set up software development processes using RCS, PRCS and later SVN. The branching patterns you use follow from the kind of software you develop and the way you want to organize your team, so there is no "one solution fits all" (as so often).
Hmmm, Normally I watch these videos and nod along as the suggestions/ideas match my own experience or don't seem particularly contentious, but this is the first time in a while you have really given me something to chew on. I had a pretty visceral defensive reaction against this one and I think I have to go and figure out why and revisit my assumptions. Thanks for keeping things interesting :)
I was also nodding along up to the point where 'bad automated testing' was the reason for needing the production branch. We keep one 'main' branch (aka. master, dev) that has passed automated unit and integration tests. We have customer acceptance and regional regulatory compliance requirements that must pass before dumping our changes out the door, however. Maybe that means we can never do CI/CD? It's also not clear to me how you'd easily conduct A-B tests; maybe project fork and parallel project that deploys behind a load balancer? I'm sure we could make things complicated enough to solve any problems. I feel like GitLab Flow is a closer fit for our workflow, but I should probably revisit my opinions and assumptions, too.
So far I have enjoyed reading the comments. This channel has been attracting knowledgeable people. I wonder if there is a Discord channel where people could extend some of the discussions started here.
Literally this comes a month after I suggest that in replacing our old versioning approach with Git, that we should work off of trunk alone... an idea which was thoroughly shot down in favour of the GitFlow approach. At least we're now only a decade behind the standard, instead of 2 decades behind...
Gitflow is best when someone needs to make a complex change. But - and this is what Dave leaves out - gitflow requires someone (team tech lead usually) to be aware of what people are doing. That's often left out of the discussion, but it is the most important thing of all.
10:37 Committing to local master then finally pushing to central master vs. using a branch, then merging locally, then pushing to central master: they are the same; the result (central master) is identical. With Git, branching is extremely cheap. With SVN I never make a branch; with Git I never think twice before branching. Inserting some temporary debug logs into the code? Make a short-term branch for it! Passing some half-baked-but-working stuff to a colleague for a demo? Fork a disposable branch!
@@andrealaforgia When I say "result" I mean the whole process. Local changes vs. private branches is only a "technical difference" (local changes are dangerous: they are stored on the local machine only, not in the central repository).
Git is for people with anxiety. Branching is cheap, but the mental context switching and keeping track of where you are is not. You are really giving yourself more work in the end, in the name of feeling "safe".
The inability to review and reject changes to develop BEFORE they get there alone is enough reason that this whole TBD approach should be a nonstarter for almost any non-trivial project. This guy is just so upset that SVN fell out of favor that he is trying to get everyone to use git like it is SVN lol.
I've done this to nearly all my projects without knowing about this, simply because it made sense. It's nice to know that I'm not the only one who came up with this idea.
@@RayZde Stability and not forcing your customers to have to use one specific version. I've had too many software products where one version won't work, but the previous version and next version do. I've also used a bunch of software where they provide free bug fixes for the life of the current major release, but they charge for major updates. It seems rather difficult to do both of those things if you're not branching. I realize that it's fashionable these days to not know the difference between major, minor and bug fix releases, but it is rather important if you can't guarantee that everybody is going to update to a newer version, or you're charging for major updates. Sometimes a major update means that the hardware that worked for the previous version just can't be supported, but you can't/don't want to leave that software unpatched because there's still significant numbers of people using it.
Your comments at 14:42 resonated with me. I've, thus far, stopped short of CI and instead used small frequently merged feature branches, but you've convinced me to try proper CI. Thank you.
We've used a variation of gitflow when multiple concurrent versions (sometimes major) of the software need to be maintained. Nowadays in those scenarios, when it's really necessary to maintain multiple versions, I suppose I'd recommend multiple CI branches.
@@andrealaforgia5066 It's not. When you have a product that has several versions used by customers at the same time, you need several CI pipelines. Consider the Spring Framework project: it has to maintain versions 5.x.x and 4.x.x (and maybe some more minor versions) while currently working on 6.x.x. They certainly have several CI pipelines, one for every release branch that is alive and one for the mainline. However, when you have a project or product that is served as a service (i.e. you do not ship your product to multiple customers), and you maintain just one single version with a CI/CD pipeline, then it is different: you need just one CI/CD pipeline.
@@andrealaforgia5066 By "release branch" I mean a live legacy branch of a product that still has customers using it. If critical bugs or security issues are discovered in such a branch, they need to be fixed, and a CI pipeline is needed for that branch to verify that a bug fix or security fix does not break anything. So you need a CI pipeline for every live release branch (that still has customers using it). Of course, you can delay branch creation until a bug is found, and create the branch from a tag at that point. But once a branch is created to fix a bug, you need a CI pipeline attached to it. Verifying that a bug fix did not break anything on a developer workstation is a little scary for medium to large systems.
@@miletacekovic
>Once a branch is created to fix a bug, you need a CI pipeline attached to it.
It's not a CI pipeline, it's a build pipeline. It's different. CI means something specific: Continuous Integration. You don't do Continuous Integration on the release branches; you keep them for hotfixes. In general, however, keeping a release branch for every customer, assuming that you have hundreds of customers, is suicidal, a good recipe for disaster. You cannot really expect to have to hotfix a bug on hundreds of branches. You will need to make those customers converge on a new release at some point.
>Verifying that a bug fix did not break anything on a developer workstation is a little scary for medium to large systems.
What developer workstation? Who was ever talking about developer workstations? Developers' workstations are temporary workbenches. CI is about integrating developers' work into a shared mainline multiple times a day. Tests run on the mainline.
@@andrealaforgia OK, you agreed you need a build pipeline on the release branch (fine, call it a build pipeline, as tens of developers are probably not fixing bugs on a single release branch, sure). But that build pipeline is basically the same as the CI pipeline on the mainline; it cannot be different. It has to contain the very same tests as the CI pipeline attached to the mainline (including unit/integration/e2e/performance/contract/whatever you have), otherwise we cannot be sure that nothing is broken by a bug fix. Furthermore, this build pipeline has to run on the CI infrastructure, not on a developer workstation. So everything here is the same as in the CI pipeline on the mainline, except that it runs on code from the release branch, so at the end of the day, calling it something different is maybe not justified.
> You cannot really expect to have to hotfix a bug on hundreds of branches.
Sure, not hundreds, but a dozen live release branches on a successful product is not uncommon.
> Tests run on the mainline.
No, tests run everywhere: on developer workstations, in the CI pipeline on the mainline, and of course in the pipelines on every live release branch.
@@miletacekovic You are not doing continuous integration on the release branch. Therefore you cannot call the build for that release branch a "CI pipeline". You are fixing bugs on that release branch; you are not continuously integrating new development. That bug-fixing activity causes frustration among your developers, rest assured, given that they have to apply the same fixes in multiple places, with all the problems that that practice entails. If you have several bugs, discovered in multiple clients' versions, you need to multiply that bug-fixing activity across all those branches, increasing frustration and the fear of mistakes. The idea that you can keep release branches open indefinitely is not a sustainable model. It doesn't really work anywhere. You will need, at some point, to make your release branch converge into master again, or you are doomed to eternal sadness. Stop calling it a "CI pipeline". CI happens *ONLY* on the shared mainline of development, nowhere else. You are talking about separate builds that happen on the CI server. That's not a "CI pipeline".
@@Jheaff1 It's comparable to a situation where changes are kept in the local copy just a bit longer before pushing to origin. Probably not a big deal if reviews roll smoothly in the team. It can definitely be an issue if they don't...
Thanks for sharing your valuable insights. I think fundamentally, a highly experienced team would have no issues adopting such practices. When you add less experienced engineers to the mix, and there's a lack of available senior engineers, things can go horribly wrong. If the code requires some refactoring, it's gonna hurt. Would love to see some real-life examples to complement your insights... That would go a long way. Happy to discuss further.
Not really. Some teams practice agile that way, but the best teams that I have seen don't. Even at the detail level the approach is collaborative and iterative. For example, on the teams that I worked on, POs would sit with the devs and see the software evolve as it was developed; if at any point we had a question about the requirements, or they didn't like the direction we were taking, we'd talk about it. Testers tested the software while it was being developed, not after development was finished. So really nothing like waterfall at all, not even a mini one.
@@ContinuousDelivery Yes, my original comment was facetious on purpose. However, it looks like you are missing the bigger picture here: all the tools that we have in development (CI, CD, unit testing, agile, XP, V, etc.) are not about methodology in principle. They are about automating, or at least formalising, communication and responsibilities, from the realisation that any work done has dependencies on previous work, and all of this should be mapped out into a workflow; otherwise you are just hacking around, which is nothing to be ashamed of, just all parties need to be aware of it. The tools we have help with workflow, and which tools we use depends on the particular task and its environment. Doing trunk-based CI/CD development when you are creating a prototype to confirm viability is wasting resources. Doing gitflow if you don't need to maintain multiple stable releases is also a waste. Doing agile if you have limited access to the project owners (which must include the end user) but are still held accountable to a timeline is a recipe for failure. Not being able to comprehend the overall picture but regardless advocating for a specific methodology is rather naive. I am not saying that you are doing that, just that I can't observe any evidence that excludes it. Having said all that, I do enjoy watching your videos, and on multiple occasions they have given me the inspiration to think more deeply about what I am doing.
The way you're describing your process seems awesome; however, it would require a test suite that is reliable, deterministic, and fully local. If you have to wait for a set of tests to run on a Jenkins machine, then you have to wait too long and figure out who broke the build. Since you can't unit test everything (sometimes you need integration tests), how do you solve that hurdle?
The answer is: mix and match. Have multiple test suites: one of which is fast and covers as much as possible which can be run before push, and then put slower tests in a CI server like Jenkins. Those tests do involve waiting, and sometimes you do need to figure out who broke the build, but it's much rarer, and a worthwhile tradeoff. Where possible, when you start getting classes of failure in the slow tests, try to find a way to surface them in the fast tests instead. Over time the compromise becomes less of a compromise :)
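As a rough sketch of that split, assuming pytest (the "slow" marker is just a naming convention for illustration, not a built-in):

```python
# test_orders.py - a sketch of splitting fast and slow tests with pytest markers.
# Locally, before every push:   pytest -m "not slow"
# On the CI server (Jenkins):   pytest
# (Register the "slow" marker in pytest.ini to silence the unknown-marker warning.)
import pytest

def add_tax(amount: float, rate: float) -> float:
    return round(amount * (1 + rate), 2)

def test_add_tax_fast():
    # fast, in-process check that runs before every push
    assert add_tax(100.0, 0.2) == 120.0

@pytest.mark.slow
def test_order_flow_slow():
    # stand-in for a slower integration/end-to-end style check, run only on the CI server
    assert add_tax(add_tax(100.0, 0.2), 0.05) == 126.0
```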
CI encourages fast feedback; unit testing should be able to give you 80%+ confidence that everything is OK. You really shouldn't rely too heavily on integration testing, as it's more complex, less reliable, less helpful, and too late for things that an IDE or unit test would catch. I like only smoke tests to check basic connectivity e2e. Unit testing to expected consumer and producer contracts is better IMO. Broken contracts are a management rather than a development issue.
@@matthewlothian5865 I've seen enough silly changes to (all kinds of) tests made by developers to have learned not to trust tests to reveal issues from other developers
In my experience, having successful CI is 100% dependent on having a reliable test suite that the team is committed to using and maintaining. If you don't have this yet, I would recommend focusing on your test suite first and CI second.
@@defeqel6537 Whatever approach you pick, if the developers either don't understand it or don't cope with it, they will break it. I think what you are saying is that if you are working with bad developers, you need to make them better. There is no process or technical fix that will correct this; it is a cultural change. You don't get to build good software with bad developers, so make the developers better, whatever that takes. I am trying to do that by explaining the techniques that the best dev teams use. (P.S. By "bad developer" I mean people who don't do a good job, not "bad people"; in my experience it is easy, or at least possible, to help "bad developers" do better.)
Committing to master several times and ensuring that each commit is stable sounds easy enough to execute; then making a pull/merge request (squash commits?) to make a single commit on origin/master seems like a reasonable approach. But I'm afraid this seems viable just for solo developers; after all, Git was created for working with many people at the same time. I'm afraid that the lack of branches will produce a chaotic git log and will probably make working with many people a nightmare. How do you ensure that all people involved in development have a high sense of discipline to keep their changes not just releasable, but stable, on every single commit? This doesn't seem like an easy change to make in a large project with many people involved.
Try it and see. I've found it really useful and it's the approach the team I work on take. It's just less context switching and messing and we can simply look at the repo to see the latest code.
I worked with 30 devs across Singapore, the UK, and the Western US - all sharing the same big codebase. We managed to work together closely and in all of four years we hardly ever needed to use a branch. We all shared the same codebase - with common ownership (i.e. anybody can change anything.) No need to "fear" anything - you just have to learn the XP way of working in a co-ordinated fashion. Branches are no substitute for working closely with other people. Now I know lots of people fear doing that and don't want to face the possibility of it - but it does actually work. My "nightmare" is not being able to get rapid feedback about things on separate branches working together - that totally kills my ability to refactor and simplify things. The code becomes very, very hard to change- and very quickly. Working with my 30 colleagues on a trunk is a lot easier because catching up quickly with changes - and learning to make small commits makes it much easier to refactor complexity away.
@@dafyddrees2287 Were you all working behind feature flags? How do you release feature #1 but not feature #2 when you have everything in master? Also how do you hotfix production when production is not in its own branch?
@@Keilnoth Feature flags are a bit of a worst case scenario because we don’t want a combinatorial explosion of switches undermining the usefulness of tests on a CI server. The trick is to build things in the order you want to release them and release very often. We did branch for hotfixes - at that point you are maintaining two different versions of the app anyway. We almost never needed a hot fix though - it happened very rarely, like once a year.
I think that, more than anything, branches and git-flow are crucial to the project management side of things. I found that having branches with names that correspond to a JIRA ticket code, for example, is very practical and easier to audit for a PM or team lead. It's always about what is practical for your organisation.
Your point is great, but I'll dare to push it even further: feature branches allow you to make pull requests! And THEY are practically crucial for management.
I have to ask: when is the "CI result" supposed to hit the end user/system? What system is there out there where the software gets updated many times a day? I don't know one "end user" of software that gets all updates the moment they are made and deemed safe and deployable.

So, if we can agree that deployment of updates has to be split out to happen at different, less frequent times than CI, like at a minimum every several days, then we can understand Gitflow better and how it can work with CI. This release cycle is where the bundle of updates that were "CIed" is pushed out to the "users". In gitflow, that is the move from dev to master. So, CI happens in dev. Dev is the "current version". Master is the version pushed to end users (and thus almost always behind the current version). So, to me, Gitflow makes perfect sense with CI too, where dev is the CI'ed branch.

The other thing I am missing is the "mistakes" that might be made. Sure, the end use of the program is the feedback, but again, you can't afford to have users continuously stopping their business work to test the changes in production. Usually you'd have a stage set up where you'd ask them to test. Usually it is in sync with the dev branch. Or there might even be a QA branch. Branches are hiding changes. They are copies. And they can easily be updated to match dev (which is a common practice too).

So, I'm not buying this. I think CI straight into end-user systems never happens, or rather is a rare animal, thus the premise of the discussion is wrong. I don't get daily updates on my Windows machine. I don't get daily updates of OSS software I use. And I don't get daily updates of my cell phone's OS. Etc. Etc.
@@1oglop1 Are you using feature branches that depend on other feature branches? The way I understand feature branches is that new features are based on master, and a feature branch is merged into master as soon as it's considered final code (note that the feature may not be complete, but the code so far is considered good enough to take responsibility for).
This was well presented. I do, however, notice that anti-gitflow and pro-trunk discussions often give very little treatment to variations in developer quality and experience and how to deal with them humanely. Also, requirements from other departments and customer-driven priorities (i.e. bugs and pilot features) are seldom linear in nature or time, in contrast to the commit log in git. No amount of software or automation can adequately replace team members actually communicating with each other. So CI/CD, in my view, can never be the silver bullet that solves all dev issues. Process is more important than software. While gitflow has its faults, and has reached its sell-by date, it was a godsend a decade ago when most teams were still battling to understand git itself, let alone how to actually manage code with it. One thing I do completely agree with, though, is the statement about the feedback cycle and its importance. But that was true even before the advent of gitflow.
I have an issue with the trunk-based workflow: how do you collaborate on more exploratory and larger features with multiple people while development on the main trunk goes on? I would branch from branches, merge commits from other branches, and visualize it all as branches. When it comes time to merge our feature, we can boil it down to a few self-contained, more easily reviewed commits.
Thank you so much. It's really useful to have a place I can refer people who mandate gitflow to. (I had to revise this comment several times to remove swearing.)
I just do what makes sense for each individual project. I generally have two branches: in-progress and stable. In-progress for things that aren't ready for release, and stable for things that can be shipped to the user. But every project is different - different scale, different team, different goal. And what works for your project will probably need more thought than a 15 minute video can determine by itself.
@@harleyquinn8202 Because a lot of the time, people (like myself) download and compile the source code directly from the repo expecting it to work, and if it's not at a point where it's fully functional or even compiles, that's pretty disappointing. Anyone who uses Arch Linux is familiar with git packages, where installing an application or library does exactly that: downloads and compiles it locally before installing it, rather than using a pre-compiled binary.
It sounds good in theory, but it's not easy in practice (i.e. juniors, unmotivated people, culture issues). I like GitHub flow with a CI/CD spirit: use a branch to write your code, but merging 'incomplete' features as soon as possible is encouraged... At least it gives you the right mindset when it works well, and it naturally falls back to traditional GitHub flow when it doesn't!
It is astounding how so much of the history of software engineering is focused around **re-discovery of the past** in the sense that things that were simple once but got murdered by senseless addition of useless complexity, are now being revisited and reconsidered as the best way of doing things, but with some reticence, mostly towards seeming... "old" or... "conservative". I call that BS. It's just ego and closemindedness. Probably mostly enforced by corporations... Thankfully programmers are generally a smart bunch and will eventually find the best solution, and channels like Continuous Delivery do help a lot to fast forward that evolution.
I'm using gitflow in the current project. For some reason (you know, legacy, no tests, etc.), we can't switch to the proposed method (and CI in general) yet, but we're aiming to. And I have to say that gitflow is great compared to the lack of any process, where everyone was merging something and on 'release day' the features we needed were cherry-picked to production, with constant reverts because of bugs. After introducing gitflow (although not perfect) we can finally take a breath. So I agree with everything you said except the title. It's not ALWAYS a bad idea; sometimes it's a step forward.
You'd be surprised how easy it is to shift to TBD from that state. The code has been tested in production. You don't need high test coverage of the existing system to switch. All you need to do is have good testing for every change going forward. You commit to "we will never push untested code again!" When I've helped development teams in this situation, we've been able to transition their legacy code in weeks.
@@BryanFinster I also transformed one of my projects as you said; however, this one is quite unique. It may sound strange, but we just can't test some of the changes automatically and be sure that they'll work as expected, even in testing environments etc. On the other hand, the system handles thousands of requests per second, and in its current state releasing changes multiple times per day is quite expensive. All I wanted to say above is that gitflow is not bad. There are many things to improve in my case, and this way of working is one of the less important ones to change, I believe.
@@comodsuda what we found was that solving for this required improving many other things that improved the overall ability to deliver. It acts as a constructive constraint to uncover problems we are numb to. I empathize with the legacy issue. There’s quite a bit more involved than “just don’t branch” when you’re dealing with a multi-team 25 million line monolith made up of 2 decades of untested code. We decided to methodically re-architect to improve our ability to deliver. It takes time, but there is payoff for the org and the teams.
I don't buy "everyone do this" narratives. TBD is a good practice, but it is not a universal "everyone do this" practice. Open source projects and many internal teams use gitflow very effectively. It often is best. It depends. Beware of claims that there is only one "best way".
We mainly use feature branches and the develop branch for creating features; however, we use release branches for end-to-end testing, such as load testing and full functional testing, going through all the quality checks. Integration testing, unit testing and vulnerability scanning happen on all branches. But personally I prefer having only a master branch and multiple feature branches.
I definitely like that this channel publishes thought-provoking ideas. But these ideas live in a bigger context. I've seen many codebases where, if they just pulled in the advice from this video, they would break their whole flow and not understand where it went wrong. Things I think you need to do before adopting this idea:
1. Have several suites of unit, integration and e2e tests.
2. Have a feature-flag-oriented approach - this is what automated and manual testing depend on.
3. Avoid refactoring. The context would be that you need to replace a certain library that you didn't implement abstractions upon (e.g. using components directly from libraries that later get deprecated; it happened in Java, Angular, React). For that you would need to reach a code-freeze moment so people won't use the old library.
Take for example the hotfix branch. You develop the hotfix; how does the tester test it? Do you merge it directly in? No, you have tags in production, meaning that the tags are stable and the commits in between tags are not automatically considered stable.
I totally agree. Gitflow is antithetical to actual CI. I have tried to change many teams' processes, but it never, ever works. People agree that what I suggest would be better, but I can't get around the organizational inertia.
Yep. The project manager, business analyst and project owner all have to be on their game to support such a workflow just as much as the development team needs to be.
If you're not doing pair programming which I know you're a big proponent of, how would you reconcile a "pull request" type workflow without (even small) feature branches?
The benefit of pair programming would be the extra eyes to review. Even with that, the team lead is usually responsible for approving the PRs, in my experience. Without pair programming, I would expect the team lead to be doing reviews... as well as other members looking over the PRs to help catch things.
Mature testing and feedback loops. If it builds and passes tests it's good; refactoring can still be done in another iteration. This is a cultural thing a team will need to get used to. With this in mind, it's crucial to make sure code is easily testable (TDD can help) and maintainable (loosely coupled, highly cohesive modules), as iterative refactoring is expected and encouraged. There are many design patterns and principles that can help keep an application refactor-ready.
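A tiny illustration of the test-first part (the function is hypothetical, nothing from the video):

```python
# Test written first; it fails until the function below exists,
# and afterwards it guards every later refactoring of that function.
def test_slugify_replaces_spaces_and_lowercases():
    assert slugify("Hello World") == "hello-world"

def slugify(title: str) -> str:
    # simplest implementation that makes the test pass; refactor later with confidence
    return title.strip().lower().replace(" ", "-")
```

Because the test pins the behaviour down, the module can be reworked in a later iteration without fear of breaking callers.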
This is my biggest problem with the video (or more specifically his videos against branching). If nothing else, feature branches provide a workspace where developers can back up unfinished work or screw around making changes that don't necessarily compile at any given time, and such. I think CI purists can go too far encouraging everyone to commit line by line to master. Not every line of code is an immediate improvement to the underlying system without extensive additional work, testing, etc. Nobody's going to commit a multi-year update to a missile guidance system directly to master, even locally. The feature branch is where the feature lives while it's being tested, reviewed, etc. Not every change is a one-line CSS update from 12 to 14 point font on someone's personal web page.

I realize he did propose 1 day as the threshold to decide what gets a feature branch, but given that feature branching is so trivial and cheap and offers lots of practical organizational benefits, I just don't see a case for not using feature branches on anything but the least consequential projects.

Also, as much as I hate things like hotfixes and different tracks (master, dev, beta, etc.), there are practical reasons why these are sometimes necessary, such as supporting a one-off customer with a security vulnerability stuck on an older version, or regional regulations that effectively demand different versions of the software. That's stuff that CI/CD purists can't really hand-wave away. I think the principles are extremely important and practical, but I get tired of hearing CI/CD evangelists describe every software project like it's a static web page or a small API, when my whole career has been spent on systems that take a full day just to test, review, and merge, all after the changes are considered finished by the developer.
@@davidboeger6766 CI as a practice is not for every project; no silver bullets. Without the culture and an enabling organisation/architecture it will be difficult. The main goal of CI, IMO, is to restrict the freedom of delivery in order to simplify and streamline the process. The restricted freedom is this principle: "There is only one working version of the software at any time." This makes reasoning about many other parts of delivery much simpler (but maybe not suitable for your org). Everything is an iteration. CI can be a difficult paradigm shift, similar to waterfall -> scrum, imperative -> functional, monolith -> microservice, branching -> trunk-based.
@Peter Brown Hmm... I'm not sure agile means what you think it means. To me, agile has the same steps as waterfall, however, you design very small features and implement them rather than designing the whole system upfront then coding the whole system.
I agree with you to a large extent. However, I do see a point in having a development branch (the CI branch) and a master branch (the production branch). I work on embedded systems (in particular, in the automotive industry [on e-drive control]), where we have software tests, hardware-in-the-loop (HIL) tests, and finally fully integrated tests on an assembled e-motor.

So for day-to-day development, I agree, it's best to have one CI branch where everyone commits to. Software tests (unit + integration tests) can be run automatically for each commit. That works great! However, in the automotive sector you also have HIL tests, where you have a very limited number of HIL devices. A set of tests takes a few hours, so doing this for every commit on the CI branch is often not realistic. It's even worse for the final tests; they take much longer. As a result, it is useful to have a temporary release branch (like in git-flow) where you do those tests at the end of a sprint. When all tests pass, that version is committed to the production branch (like in git-flow), where all the other departments can always get the latest stable version. This production branch has one advantage (over just a tag on the CI branch): clients or members of other departments always have the latest tested/stable version. This gets particularly important because they are not always good with version control.

Regarding synchronisation between the production branch and the CI branch, I agree that git-flow does it wrong. Any code change should only be done in the CI branch. Hotfix branches are a big no-no. IMHO, there should be only one direction in which commits come into the production branch: always from the CI branch. Then you don't have a problem with diverging branches.
In your case, I think the only limitation is that your code will be releasable only after passing all those tests. But that doesn't prevent you from using a single branch for continuous integration. The changes can go in as switched-off features and be switched on only in the test environment.
An interesting idea which, like everything else about continuous delivery, is completely wrong. Does Toyota change their manufacturing line every day? Do they change their suppliers of components every day? Of course they do not. They make minor changes ("hot fix") only when necessary, they make significant changes only once or twice a year ("minor release/model year"), and they make major changes only every few years ("major release/generation"). If you wanted to try to translate continuous delivery to the automobile industry, it would mean every car is built differently, with no regard to interchangeable parts, and you'd have to recall every car whenever something went out of date.
I agree that Vincent's statement was respectful. I remember reading Jimmy Bogard, the creator of C#'s AutoMapper, blogging about when not to use AutoMapper. Having the creator's candid input is very insightful and useful for stopping bad or smelly practices.
I think continuous integration is a good idea, but I think pushing directly to origin/main (or origin/master) is a bad idea. My preferred way of working is to split backlog items / user stories into small, (mostly) atomic tasks that aim to introduce one small addition. When starting a task we create a task branch that is short-lived. When we are ready to integrate, we create a pull request and another member of the team peer-reviews the task. I don't care how senior or seasoned a developer is, nobody pushes directly to main. All developers are human and everyone makes mistakes. By peer-reviewing every single addition to the codebase we catch these small mistakes early. When the team works at full speed, each developer can still implement multiple tasks in a day, all the while reviewing tasks from other developers.

The added benefit of this is that you get to read other people's code daily. That is a great way to learn. Maybe someone knows a nifty trick to tackle a certain problem. When you get to read that code, you learn the nifty trick too. Reviewing is not just about finding mistakes; it is also a great way to spread knowledge.
>I think continuous integration is a good idea, but I think pushing directly to origin/main (or origin/master) is a bad idea.
That's a contradiction :) It's not CI if you don't push directly to the main branch of development multiple times a day. Note that CI and trunk-based development are the same thing.
>My preferred way of working is to split backlog items / user stories into small (mostly) atomic tasks that aim to introduce one small addition.
That's great.
>All developers are human and everyone makes mistakes. By peer-reviewing every single addition to the code base we catch these small mistakes early.
Sure, and that's why CI is not removing the benefit of code reviews from the picture. It's only advocating a different way of reviewing code, through continuous code reviews that happen *while* developing, and not at the end. There are various disadvantages to having PRs at the end of development phases: it's extremely hard for a reviewer who has not been involved in the development of a feature to get a good understanding of what the code does. You haven't seen it working live; you only have a bunch of files to statically analyse. The risk is that reviewers only skim through the files for a superficial validation, trusting the creator of the PR (especially if she/he is a senior member of the team who knows the system well) and coming up with a "LGTM". This is where PRs can become really dangerous tools. It is much better to use pair/mob programming and continuously review the code while working on it.
>The added benefit of this is that you get to read other people's code daily.
Is that a benefit? Having to stop your development activities to read other people's code of which you know very little?
>That is a great way to learn.
Sure, but learning through collaboration is 10x better.
It has worked well for me to take a break from what I'm doing to look at someone else's work. It gives me an opportunity to step back from what I was doing. It often gives me new ideas, or I might realize something that I wouldn't necessarily have thought about if I had just kept doing what I was doing.
Also, keeping the diff small helps. And you should always check out the branch you are reviewing and look around the code, not just the diff. You can try building and running it locally while you're at it.
I have a question: do you do automated testing after the merge and before allowing customers to use the software? If yes, what software is served while that testing is running?
Exactly. Great explanation! CI is where feature toggling becomes even more important: maintaining multiple features becomes a matter of a condition within the code, not a branch... Continuous Integration, Continuous Deployment, and Delivery on demand (continuously deploy into production, and toggle features on when ready for delivery to the end user).
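To make the "condition within the code, not a branch" point above concrete, here is a minimal sketch in Python. The environment-variable toggle and the checkout functions are hypothetical, made up for illustration only, not anyone's actual implementation.

    import os

    def feature_enabled(name: str) -> bool:
        # Hypothetical toggle lookup: read flags from environment variables,
        # e.g. FEATURE_NEW_CHECKOUT=1. Real teams often use a config file or
        # a flag service instead.
        return os.environ.get("FEATURE_" + name.upper(), "0") == "1"

    def legacy_checkout(total: float) -> str:
        return "legacy checkout, total=%.2f" % total

    def new_checkout(total: float) -> str:
        # The incomplete feature lives on trunk but stays dark until the
        # toggle is switched on, so no long-lived branch is needed.
        return "new checkout, total=%.2f" % total

    def checkout(total: float) -> str:
        if feature_enabled("new_checkout"):
            return new_checkout(total)
        return legacy_checkout(total)

    if __name__ == "__main__":
        print(checkout(42.0))  # takes the legacy path unless the flag is set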
"Represents the reality of Software Development" - What reality? Branching isn't complicated or slow and it certainly doesn't prevent continuous feedback. You can always choose to merge other branches into your working (or feature or w/e) branch *at your discretion* . The "reality" is different for each developer, team and organization. Say you have a testing environment that runs in parallel to your production environment so your non-technical stakeholders can provide feedback and are free to experiment themselves. Do you really want to deploy these two different environments from the same branch? If yes, you just made things more *complicated* in the real sense of the word. You are tangling up two things that should be separate and simple. Another reality is that you might have fluent, constant communication in your team and a codebase that allows for separated features, modules and abstractions to be developed independently. You communicate and know in advance that they won't intersect in critical/logical areas, but only in the plumbing. It becomes useful to separate these working items into branches, because merging/coordinating plumbing code is straight forward, but becomes tedious or even inefficient if you need to do it constantly because you don't know yet how to connect the dots before certain parts are finished. So in conclusion, I find this advice useful if modified this way: If you work in small teams, direct communication between developers and other stakeholders is guaranteed, then use the branching strategy that fits your needs AKA "the reality" and don't just follow a predetermined pattern (like git flow) but make it as simple as it can be, but no simpler. Strong conventions and rules can become useful only if you need to context switch between many different teams and projects. Otherwise just use your tools and adapt your processes to your reality.
@@andrealaforgia5066 processes and tools cannot substitute communication and engagement with your coworkers. There is not one size fits all, no silver bullet is what I'm getting at. The beauty of git is that it doesn't inherently prevent you from merging or branching. If you need to branch, then do it; if you need to merge, then do that. It is a highly dynamic system. Using it should be driven by actual needs, not arbitrary rules. Saying that rule/methodology X simplifies things begs the question: under what circumstance? Simplification is not subjective. It means you are disentangling something that should not be intertwined. The subjective part is the "reality" that you model and work with.
@@clickrush >processes and tools cannot substitute communication and engagement with your coworkers. There is not one size fits all, no silver bullet is what I'm getting at. Again, this is a typical logical fallacy, black&white reasoning. Who ever said that CI is a "silver bullet"? CI is a way of working that has proven to be better than other ways of working to develop software. Period. No one has ever stated, in any books/resources/articles about CI, that CI is a "silver bullet". People keep rejecting CI and trunk-based development while putting a lot of emphasis on communication, as if communication were the only thing a team needs in order to deliver software. A team needs to be able to continuously integrate their work. That's the point. CI is not substituting communication and engagement with your coworkers. How is a long-lived feature branch approach fostering any communication, given that it's a way to hide your changes and silo your development? Developers adopting feature branches often do not communicate for days and days, only to discover problems at the time of merging their changes. >The beauty of git is that it doesn't inherently prevent you from merging or branching. I don't see that as a "beauty". This video is not about git, it's about GitFlow. It's different. >Saying that rule/methodology X simplifies things begs the question: Under what circumstance? How much do you know about CI, which has been going on for almost 2 decades, and all the studies about it that prove it's the best way to develop software we know so far? Read "Accelerate".
@@andrealaforgia I wasn't arguing against CI generally. I was questioning the notion that one particular way of using git "represents reality" for all, and was giving examples where you make things more complicated if your model doesn't match your circumstances. What may happen if you don't separate work into branches on the VCS level is that you are separating it on the code level. You introduce configuration and (ad-hoc) logic in your code base so you can accommodate staging environments, beta/prototype features and so on. Which means you need to test that code too, which means you blow up your code base just so you can avoid branching. It's a tradeoff. In some cases this is great, in some it isn't. Again, my point is not against CI generally. It is against big claims about how people should use their tools, made via statements about "reality" and "best practices". And I didn't want to say this at first because it shouldn't matter, but I don't need to be convinced of simple branching models and CI. I/we actually use CI most of the time, probably over 95%, except when we don't. When we need a branch for something then we just branch instead of coming up with a convoluted way of avoiding it.
@@clickrush >I wasn't arguing against CI generally. I was questioning the notion that one particular way of using git "represents reality" for all I see a contradiction there. CI does dictate "one particular way of using your VCS". The definition of CI is "practice of merging all developers' working copies to a shared mainline several times a day", so if you're not questioning CI, you shouldn't be questioning trunk-based development either, cause CI and TBD are the same thing. Nobody is saying that this particular way of using git represents reality for all. What has been said is that if you want to implement CI, you need to give up ways of working that are antithetical to CI, and GitFlow is one of them for the reasons exposed. You are still free not to do CI, though. >What may happen if you don't separate work into branches on the VCS level is that you are separating it on the code level. You introduce configuration and (ad-hoc) logic in your code base so you can accommodate staging environments, beta/prototype features and so on. Which means you need to test that code too, which means you blow up your code base just so you can avoid branching. Absolutely not. Have you actually ever tried trunk-based development + feature toggles? It's much easier than you'd think. When feature toggles are inactive, you can consider the code they hide as not there at all. Separating the code physically (feature branches) offers fewer benefits than separating it logically (feature toggles). The latter approach at least makes sure that the various streams of development are integrated; the former doesn't, and the longer those branches live, the more they diverge from each other and from master, and the riskier it becomes to merge them into master. You can switch features on in your specific test environment and do all you want. It's much cleaner and simpler. The ability to integrate work and the ability to test/release features are two different aspects of software development. Note that you say "you blow up your code base just so you can avoid branching". First, you don't blow up your code base at all, quite the contrary. Second: the purpose here is not to avoid branches, but to fulfil the definition of CI. The fact that branches are avoided is a nice side effect.
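As a small illustration of "switch features on in your specific test environment" from the reply above, here is a sketch under the assumption that toggles are plain configuration; the environment names and flag names are invented.

    # Hypothetical per-environment toggle configuration. In practice this might
    # live in a config file or a feature-flag service rather than in code.
    TOGGLES = {
        "test":       {"new_pricing": True,  "beta_report": True},
        "production": {"new_pricing": False, "beta_report": False},
    }

    def feature_enabled(flag: str, environment: str) -> bool:
        # Unknown environments or flags default to "off".
        return TOGGLES.get(environment, {}).get(flag, False)

    if __name__ == "__main__":
        # The same trunk build is deployed everywhere; only configuration differs.
        print(feature_enabled("new_pricing", "test"))        # True
        print(feature_enabled("new_pricing", "production"))  # False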
I have some points which in my opinion support the idea of feature branches, and they are mainly about QA:
- Code review: a pull request from a feature branch to develop or master can be reviewed very effectively. The reviewer does not need to go through all the commits that were made in order to create a feature, only the diff that is present at the end.
- Testing & review: if your feature lives on a branch, a tester / product owner can review the version on this branch. If bugs are found or things are missing, we do not have that "broken" state on master; we can fix it on the feature branch. I think this helps towards having a stable state on master which is always releasable.
Dave is missing many points here. First of all, he is putting "continuous deployability" on a pedestal. In reality, most companies couldn't care less. The ultimate goal is to support the business and most of the time deploying rarely like once a month or quarterly is completely fine. Secondly, he is talking about potential conflicts and having out-of-sync copies of code. If team members are using common sense, these things happen very rarely and are resolved swiftly. In general, we should try to avoid marking tools as "bad idea, period". Both gitflow and Dave's idea of continuous integration are viable strategies with distinct characteristics.
@@andrealaforgia5066 Thanks for sharing your opinion. I would gladly hear more. Since so many people are favoring Dave's approach there has to be something valuable there, even though I cannot see it yet. a) 100% agreed that teams should integrate their work often. I am using gitflow, and everybody is integrating their work often (small PRs => short-lived feature branches + every PR is built/tested before merging to develop). It is hard for me to imagine how giving up feature branches is better. I am happy to learn though. b) I may have just not experienced the problems you mentioned. By common sense, I mean stuff like talking to each other and recognizing that if you are working on this module, I will just do something else in a different part of the code. If there is a shared piece, maybe let's pair program a common part first. Again, I cannot imagine how such an approach would lead to any of the substantial problems Dave is mentioning.
I still disagree that GitFlow is incompatible with CI. It may be incompatible with CD, but I don't really think that's a bad thing. Not every company needs CD, and far too many companies try to have CD when they don't really need it. On the other hand, I would prefer GitHub flow.
Not if your test harness on the master branch runs for 16h+ (SW+HW simulations). Just imagine running all tests on all hardware platforms for Linux (a quite successful 30-year-old project) after every single commit. CI/CD is OK for small, local teams (feature branch maybe?).
This week I had the opportunity to start testing trunk based development with my team. Thank you for the valuable information.
How do you feel about CI or even CD in open source projects? How can you organize and achieve it there? What about validated environments like health-related businesses (pharma, hospitals)? Here each released and used version needs to be validated (sometimes even by outside parties). How would CI/CD work here? Would love to hear your input on these!
GitHub actions? Travis CI? Many open source projects have integrated CI, with CI build state badges, some even with Code Coverage, Static Code quality analysis, Static Code security checks, dependency checks... all free for Open Source projects.
@@miletacekovic I know about the software solutions for Automated pipelines. These are tools to help facilitate CI/CD. They are not continuous integration itself. I was not talking about the technical aspect for open source. But usually open source projects get contributions by being forked and then having a pull request accepted. And, if you saw the video, this is not true continuous integration (CI), since it is basically creating feature branches. That is what my question is directed at. How do you organize it with many distributed people. Or even harder in my opinion in validated environments.
@ Simple branches with pull requests are fine in that case, when you objectively cannot organize pair programming and must do peer review. But then pull requests are better merged into main; no need for Master and Develop and all that complexity.
Interesting, any thoughts on how to manage auditing as part of CI? We get peer and independent review of each feature branch prior to merge, and audit those reviews prior to release (random sample testing etc). We maintain multiple production versions (mostly due to air-gapped deployments), so I can't even approach CD, but I do see CI as a better concept for developing at a higher cadence.
If you can get the audit department to accept the CI server and CD pipeline as good enough, you can do trunk-based development. Pair programming is great for review, but sometimes that's not accepted because it's hard to audit. If you need the source control system to show a log of reviews, then you can use very tiny feature branches. Basically, the branch should be open only very briefly, for a very small change. This way you can still integrate multiple times per day. That's at least how we solve it in an audit-heavy world. Also, in CD we include a risk-based change approval flow, connected to the service management tools, which sometimes requires an approval before a change gets deployed to production. The product owner then gets notified via email and has to approve. Risk is determined by the type and size of the change and so on.
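A rough sketch of what a risk-based approval gate like the one described above might look like; the change categories, scoring, and threshold are all invented for illustration and are not taken from any real service-management tool.

    def change_risk(change_type: str, files_changed: int) -> int:
        # Hypothetical scoring: riskier change types and bigger diffs score
        # higher. A real pipeline would tune this to its own context.
        base = {"config": 1, "code": 2, "schema": 4, "infrastructure": 5}
        return base.get(change_type, 3) + files_changed // 10

    def needs_manual_approval(change_type: str, files_changed: int,
                              threshold: int = 5) -> bool:
        # Low-risk changes flow straight through the pipeline; high-risk ones
        # pause until the product owner approves.
        return change_risk(change_type, files_changed) >= threshold

    if __name__ == "__main__":
        print(needs_manual_approval("config", 3))    # False: auto-deploy
        print(needs_manual_approval("schema", 25))   # True: wait for approval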
Hello Dave, thank you for the video. The idea is quite clear, but I'd like to ask a question to clarify one thing. Suppose we have a new application to build and it will take months before an MVP release is ready, and of course we have yet to write any integration/UI tests before we can even start doing CI/CD. What branching strategy should we choose?
I choose Continuous Integration, sometimes called Trunk-Based Development. The really important thing in this phase is to spot problems as quickly as possible, because in this phase you will make a lot of mistakes as you explore the problem and your solution. So CI is EVEN MORE VALUABLE at this time.
I believe feature branches are a branching strategy for your local development; somehow people tend to push these feature branches to origin and never remove them after merging to the develop branch. Even if a feature branch is pushed to the origin server for backup purposes, it must be housekept and removed after the project/CR finishes. I think most developers haven't quite got used to the distributed concept and practice it like the old client-server approach, where eventually every local git repo is also considered a server node.
Push origin master? How do you handle code reviews and ensuring quality? "It works" is a dangerously nebulous term... It compiles? Great. But does it actually _work_? And if it doesn't, what then?
Git Flow is great. Instead of having little feature flag turds all up in your source code, the feature flags are feature branches that are only merged into production once they're ready.
Hi, I find the reasoning behind pushing small changes into master convincing in terms of safety and integration; however, I don't understand how I can have a pull request if I push my changes directly into master. Isn't this too big of a trade-off? Maybe it's better to create a feature branch even if it's just for 2-3 hours, in order to have PRs?
Pair programming, or yes, just create a mini feature branch. The idea of CI is not about no branching at all but about committing (merging) frequently; branches just tend to become long-lived, so we want to avoid that.
Oh man, this is what I've been telling people for years. That flow creates so much unnecessary work, complicates code reviews and leads to many frustrated hours during merges (sometimes making merge impossible)
I am SURE that I am misunderstanding something about Continuous Integration as you describe it now... My question is about development on a non-trivial, wide-reaching, breaking change/feature/spec. HOW do you pull in the current changes from other devs while you're actively changing what they are changing? Won't you be repeating your conversions multiple times a day? Do you need to engineer a SHIPPABLE transitional state as you move toward the new, breaking, end result?
You don't "pull in" those changes. You and the other devs work on the same codebase (Continuous Integration Trunk-based development). Working on the same codebase and committing micro changesets multiple times a day, you break down work more easily, hiding incomplete features behind feature toggles, and avoid merge hell.
@@andrealaforgia - Yes, unless the Gitflow project is ruled by an "iron fist", it does become a _merge hell_ as more branches are created and changes start to occur on a released product. CI merge change deltas are small so potential merge conflicts are minimal, if any. I think one of the toughest challenges, when moving from waterfall to CI, is in the breaking down of work items into smaller pieces, which requires additional discipline and effort. A single waterfall work item may even be an epic in a CI equivalent... Just my opinion though... Cheers,
Feature Toggles/Flags can isolate changes until they are complete, but teams have to be diligent about maintaining compatibility, using an expand/contract approach, and cleaning toggles up later
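A minimal sketch of the expand/contract (parallel change) approach mentioned above, applied to a hypothetical field rename on a dict-based record rather than a real database schema; all names here are illustrative.

    # Expand/contract in three steps, sketched on an in-memory "record":

    def expand(record: dict) -> dict:
        # Step 1 (expand): add the new field alongside the old one and keep
        # both populated, so old and new code can coexist on trunk.
        record["full_name"] = record.get("name", "")
        return record

    def read_name(record: dict) -> str:
        # Step 2 (migrate): new code prefers the new field but tolerates
        # records that only have the old one.
        return record.get("full_name") or record.get("name", "")

    def contract(record: dict) -> dict:
        # Step 3 (contract): once nothing reads the old field any more,
        # remove it and clean up the related toggles.
        record.pop("name", None)
        return record

    if __name__ == "__main__":
        rec = {"name": "Ada Lovelace"}
        rec = expand(rec)
        print(read_name(rec))   # "Ada Lovelace"
        rec = contract(rec)
        print(rec)              # {"full_name": "Ada Lovelace"}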
Dave's videos are really a great insight to understand the basics of software engineering. I have a few questions after watching this episode. How do we do peer review when working on master branch directly? I understand that pair programming is an effective way to improve code quality, but does that eliminate the need for peer review? Is peer review an overrated concept?
Yes, it eliminates the need for peer review, because you have a constant “peer review” during construction. I have worked in several different regulated industries, all of which required peer-review, pair programming counted as peer review in all of them. The quality of work produced by pair programming is certainly, measurably, higher than code without pairing. I haven’t seen any academic studies of “pair vs peer review” but subjectively, the places where I worked and did pairing built better software than the places where we did peer review.
@@a544jh It is sad that people have been lured into our industry thinking that they don't have to work in a team or interact with people, which is at the core of software development. Ensemble working is the better default approach.
@@a544jh in my experience the problem is that a lot of developers haven't tried it. My experience has been that the majority of devs prefer it once they have tried it, and a small minority, less than 1 in 10, really dislike it.
I can't agree with the title nor some of the content of this video… It's simply misleading to say that gitflow is bad, since it works for so many teams and devs. In our team we maintain several environments (dev / test / acceptance / master) which each have their own testers. Some of the features (so feature branches!) get accepted in dev before they go to test and acceptance, while some may be turned down. Similarly, this happens in the acceptance environment before going to production (master). In this case it's easier to maintain environment branches and the individual feature branches, and eventually merge them into the target branch when they have been tested and accepted by the end users of the environment branch prior to the target branch… It's not easy to explain in words, but simply saying not to use certain techniques without nuance, and ignoring the use cases they may have, smells like bad teaching to me!
TBD is more for agile organizations that appreciate fast feedback. GitFlow works better for gatekept waterfall-style and trust-lacking environments like yours, which is fine.
Question... are different branching strategies for different stages in the SDLC? Or are people able to do this CI strategy from the first commit? I've mostly worked in git flow houses because the maturity of the developers/project managers isn't there to support single-branch workflows. I see how, from a developer standpoint, this works and is beneficial... but from experience... how do we get teams up to working this way when they are often new to agile, git, or enterprise software development altogether?
Agreed, if your team is allowed to break the trunk then CI will not work. It can definitely work from the first commit (though I would recommend getting that first commit in quickly as multiple people creating the build scripts/tooling will cause friction).
I remember back in the day when we were using Subversion, we committed directly on master (trunk or whatever that name was at the time), and others had to pull and merge before even considering committing. And you did not want to catch up too late, otherwise you were running into merge hell AHAH. Thinking about that today in the light of this whole video made me think it was not so bad...
When I first saw the gitflow diagram I felt sick at the sight of all those arrows. Every one is a potential merge hell. It's great for the "muggles" (non-developers) that worry about what's in a release but never have to use git directly to resolve a merge.
@@zauxst You obviously have never used “MercilessRefactoring” - you must just leave inconsistencies and design burps build up everywhere… or spend almost all your time merging. You have never tried to do XP and CI properly. Lol…. (Why are devs so soften arrogant pricks?)
@@dafyddrees2287 feels weird saying to someone that is a "devops" by trade that "you never have tried to do CI properly". Anyway, it was a question, no need to put your hand deep in your arse.
@@zauxst I meet loads of people that do devops and dev “by trade” that haven't ever learned to do things the XP way (including CI.) It's pretty rare and getting rarer. You're the one with the attitude problem here, mate, with your supercilious use of “lol” after demonstrating clearly that you don't understand why lots of merging would be a problem getting in the way of “MercilessRefactoring” (yes, it's a thing - if you dropped the attitude long enough to learn about it you'd answer your own question.)
This is why I like the term 'continuous separation' for ways of working like git flow. Git flow also reminds me of how we worked in the past, and this led to very late integration of changes, causing a big effort to get a working version of the product.
Thank you so much for this. I've been arguing against GitFlow for ten years. Next please debunk versioning using release dates or git commit IDs.
Please don't use animated backgrounds... they are very distracting.. just do a standard office background. Great content here, but the presentation (background, wardrobe, etc) can be improved. Great work Dave! :D
I don't mean to sound contrarian, but I feel like you didn't do a good job of articulating why gitflow is bad for CI in this video. You seem to imply that it makes it harder to test your code and automate that process, but there are tons of tools out there which can trigger automated tests whenever a pull request is made. Why wait until the code is merged to run automated tests? Additionally, you mentioned that working directly off master gives developers more confidence that their changes are release ready, but this seems to make three key (and often incorrect) assumptions: 1. Tests are thorough and correct 2. Code is well written and meets the company's standards 3. Developers are only ever working on one feature at a time In reality developers are lazy and rarely test their code thoroughly, new hires will often write bad code, and developers are often forced to context switch regularly between tasks.
Well, my take on this, if you allow me, is that CI forces everyone to take a different approach to how they develop software. For CI to work, everyone must learn how to break things down into smaller, releasable changes, and commit those changes regularly. And this different approach is overall beneficial and a better, more efficient way of developing software. Not because someone says so, but because people who have worked properly with these different strategies found that with a CI approach you create value way more often. It's not, by any means, an easy thing to accomplish, especially with bigger teams. But that doesn't mean it's not worth doing.
If you've got simple code and can count on everybody using the most current version of your code, then CI seems like it might work out. As long as you know if the code is correct and reasonably secure. Honestly, if the code is that simple and short, then it doesn't much matter how you're handling the revisions, it'll probably work. But, if you've got something as large and complicated as an operating system, I'm not even sure how you would be able to apply CI in any sort of sane way. Sometimes, the best thing to do is to just use several branches and be done with it.
We developers will do what we are incentivized to do. It sounds like the developers you work with are incentivized to use Grenade Driven Development where they are treated as a glorified typing pool with no responsibility for outcomes who toss the results over the wall for others to suffer with. GitFlow may hide that problem, but it's not fixing it.
@@andrealaforgia5066 "There is really no value in running tests on individual, isolated PRs. There is much more value in running tests on integrated code." In reality, as a matter of best practice, feature branches should be regularly pulling from the integration branch - yes, at least daily. That's where 'continuous integration' happens. With this pattern, the integration branch should always build & pass all tests and merge conflict resolutions should never have to happen on the integration branch. The 'one branch' advocates are defining continuous integration only as regular (i.e., daily) deliveries to the integration branch. With a feature branch methodology you still do continuous integration by regularly pulling _from_ the integration branch. The distinction is at some point just a matter of religion or favorite color, as working with one branch but using a local repo is just a different means of state separation, just as a branch is. Each means of separating state simply has different pros & cons.
I am quite confused about why trunk based would be good at all. Imagine the following scenario: John creates some changes, commits them, and they work locally. Barbara creates some changes, commits them, and they work locally. John pushes, however it doesn't work in a preview environment, and requires changes. Until John is done fixing his changes, Barbara is unable to push, since her changes will fail as well due to John's changes. This could delay Barbara getting feedback for several days in the worst case scenario. With feature branching, John will push to his branch, Barbara will push to hers. The CI will do an automatic merge from master into the feature branches, and both their applications are published to a Preview environment. Barbara finds out her code works, John finds out his doesn't. Barbara merges into main, and John gets those changes on his branch. John can then continue to work on his changes until they work, and merge into master. At all moments in time, both John and Barbara can test their changes, no delay. What is the problem with such an approach? I see no downsides.
The first scenario that you describe is telling you the truth: as long as John's changes are in place, the code is not releasable. So his job is either to fix things as quickly as possible or revert his changes. The second scenario is lying to you. John and Barbara both think their changes are good, after all they are working on their feature branches, but as soon as they merge them together, all hell breaks loose and nothing works. They don't find this out until much later in FB than in TBD. That means that the amount of stuff that they are attempting to merge together is much bigger, and so more complex, and so harder to figure out what goes wrong. The problem with this approach is that the data says that FB produces software more slowly, and the SW it produces is less stable (more buggy). CI produces better results. You can find this in the DORA data from Google, and read about it in more detail in the Accelerate book.
@@ContinuousDelivery I'll make sure to read about both. Wouldn't the hell only break loose when both people are working on closely related elements of the same system? In that case, it could also be an idea to have both people working on the same feature branch. That way they can still see the changes working together, and you keep the benefits of having a separate branch, such as being able to have a proper review process. The only problems I have ever experienced with FB are when a useful feature (such as a library) was added that you want to use in your feature. I have since solved this by getting automatic branch updates. With FB, releases can be automated even further. You could automatically deploy feature branches to a preview environment. You could automatically deploy pushes to main/master to staging. And after a tag or release is made, a deployment to production can be automated. With TBD you won't be able to have these feature preview environments. Again, I'll make sure to read the sources you have provided to get a better insight into this. I have not worked on any large projects, only projects with a maximum of 5 contributors, so my experience is limited.
@@rafaeltab so now you are doing more work to figure out how to divide up the work between people so that they don't overlap. 😉 The approach that I describe doesn't care, and catches those times when people's work accidentally overlaps.
@@ContinuousDelivery The problem I have with it right now is that the main branch will either not always be ready for production, since it contains unfinished features, or you won't be able to adhere to the rule of 'at least one merge per day'
@@rafaeltab What do you mean by "not ready for production" and "not able to adhere to once per day"? Why would that be true and why wouldn't you be able to have those things?
You need some branches or else how do you do code reviews? Developers should never commit code to master without someone else reviewing the pull request. Tests are absolutely run on every feature branch.
The point is that if you have a strong testing culture, code reviews don't need to be a first class citizen. Integrate, tests green, ship it, refactor later. If every time someone integrates, they break something that your automated test suite doesn't pick up on, then you have bigger problems than what branching strategy you use.
Code reviews are highly overrated. Have your team work as a real team via pair/mob programming and you won't need code reviews and PRs. PRs were not meant to be used by teams of colocated, trusted collaborators.
@@KrisMeister Your job is to deliver value to customers/users/stakeholders. If you have sufficient automated testing, your feedback loop is much quicker than that of a "lead dev". By all means this doesn't mean the "lead dev" can no longer do reviews, but it can occur post-integration rather than pre-integration which could lead to prolonging the life of a branch even further.
Quite honestly, all the discussion about what branching strategy to use is, I think, worthless without considering how you're doing your testing, where you're doing your testing, what environments you have to do that testing, how those environments are used, and then eventually how you get to production and track bugs and fix them. In short, you need to consider the whole deployment and testing process, or the best branching strategy is really hard to pin down. Right now our whole problem is around the deployment pipeline and the automated testing and how to make sure that doesn't interfere with QA testing. In the project I'm currently working on, we are severely limited in the environments we can deploy to and how we can do our testing in these environments, due to budgets or the time constraints of setting all these environments up. Branching and merging are not our problem; the testing and deployments have become the real issue.
Wow, this video really triggered my mental defense system :D I have to say that at a glance, I really don't like that idea; maybe trying it out would change my mind... BUT. TLDR: How do you do reviews? What if I break something and push? How do you track bugs from production? How do you track changes related to a Jira ticket? First of all, I would hate to start every day by solving conflicts. It always feels like a waste of time. With feature branches, I have to do it once, and only I have to solve conflicts with my version. With trunk development, I imagine that every morning the whole team has to do that work if someone pushed changes yesterday. I know it would be a bit more smooth, but if I was "required" to push my changes to the dev branch at the end of my day, I need to pull first and solve conflicts. Then I can push, hoping that no one pushed anything in the meantime. Then, tomorrow, I have to start by doing the same frikin thing. I am aware that most conflicts are solved automatically with kDiff or something, but it still feels like a burden. Second problem: what if I break something? What if I made all the unit tests pass but broke something at the system test level? In my project, system tests require creating an Azure VM with the whole system setup (we code an app that works inside a bigger app, like a plugin); it takes half an hour before the tests even start. So if I push changes and everyone pulls them, now everyone has broken code. Who fixes it? Me? Should everyone just wait until I fix it, or should they revert? How do I even know that I broke anything? What if it blocks their work? Feature branches give us isolation and defend us from that, especially with a setup that requires a green build before merge. Nothing stops me from deleting all the code just for giggles. How do you do reviews without feature branches? Third problem: if something breaks in production, how do you track down what broke it? How do you revert the change? With feature branches, you revert ONE merge commit. With trunk-based development, do I need to look for all the commits I made that are mixed with the commits of 10 other people? Seems like a nightmare. Also, when do you deploy? At what point is there a build with a full suite of tests that, if it fails, blocks the process? If it failed, how do you track down what broke it and who should fix it? Plenty of questions... Happy to discuss and learn!
So: I work under the model described above, and it is vanishingly rare to spend _any_ time solving conflicts. Pulling and pushing frequently (many times a day, not just at EOD) means two different pairs are rarely touching the same code at the same time. What if we break something? We fix it. We have fast tests which cover as much as possible and which we run pre-push, but also slower tests that give us feedback more on a scale of an hour or two. That means sometimes people will pull broken code, but usually subtly and very specifically broken code which doesn't stop them from progressing. We have a sheriff - a rotating role to keep an eye on CI and address any broken builds, which usually means going back to the pair that broke it to work out the fastest fix (usually a revert, with a fixed re-apply following). To continuously push, you need either continuous review (eg pair programming) or trust. If you don't have trust, then drop everything else, that's the single most important thing to build in any engineering team. Every bug in production boils down to one commit. Reverting a large feature branch which contains any refactoring or reusable utilities is likely to be a merge nightmare: granular commits are much easier to revert. The trick is identifying what the bug is and how it's happening - which is kind of orthogonal to how you push your work. In general: only deploy something which passes all the tests. That might mean, if you have a slow acceptance loop after your fast unit test loop, you probably want to mostly wait for the slow loop to conclude successfully before deploying. There may be circumstances where it's pragmatic to circumvent slow tests to get a fixed build out faster, depending on your domain and its risk/opportunity profile.
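For the "fast tests which we run pre-push" part of the reply above, here is a small sketch of a git pre-push hook written in Python. The use of pytest and a "fast" test marker are assumptions, not universal conventions; the script would be saved as .git/hooks/pre-push and made executable.

    #!/usr/bin/env python3
    """Hypothetical pre-push hook: run the fast test suite and block the push
    if it fails. Slower acceptance tests still run later on the CI server."""
    import subprocess
    import sys

    def main() -> int:
        # Only the quick tests run here; "-m fast" assumes tests are marked
        # accordingly, which is a project convention, not a pytest default.
        result = subprocess.run(["pytest", "-m", "fast", "--quiet"])
        if result.returncode != 0:
            print("Fast tests failed - push aborted.", file=sys.stderr)
        return result.returncode

    if __name__ == "__main__":
        sys.exit(main())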
@@TARJohnson1979 In my team we have an intern and an aspiring junior who need some eyes on them; reviews and feature branches are great for that. They work at their own pace, we give them feedback, then merge. Trust is one thing, but you still have tests... Don't you trust yourself and your colleagues? :P So in short, you have a person guarding order; we have automated blocks in the way to prevent us from messing up. I like that single commits are easier to revert, that's true, but I still wonder how you link your code changes to a ticket in your work tracker. I guess you put the ticket number in your commit message and then have something that easily finds the proper commits. How do you work on two tickets at the same time? You just start coding the next thing and put a different number in the commit message? I think I would really need to work in this manner to get a proper opinion. I would really like to try it.
@@Qrzychu92 So, what we do sounds like it's different from what you do along a whole bunch of axes. For example, we don't have a concept of work tracking. We have tickets, but they're there to spell out what we're trying to do, not as a running status update on what we're doing. The linking between a commit and its story is just a reference in the commit message, and moving from one piece of work to another is just picking up the next thing, no real overhead to it. Trust is multi-dimensional. I trust my colleagues not to maliciously damage the codebase, for example. I also trust them to know what sort of testing is needed for a given piece of work, and - maybe most importantly - to know to reach out for assistance when they don't know what they need to know. I don't trust them to just get code right first try, because we all know that's not something people actually do. That trust has been established through collaboration, though - it's not something we just assume is there. As for interns / juniors, my experience is: pairing works really well for this, but isn't sufficient. Sometimes, you've just got to let them get into the weeds at their own pace. That's a context where working in isolation followed by a review and discussion makes a lot of sense. But that working in isolation and then seeking review is not about how we develop software, it's about how we develop team members. It's a different activity.
"When do you deploy?" Ideally, as soon as the tests pass. "If something breaks in production, how do you track down what broke it?" This is where good CD practice comes in. If the test pass, ship it. If it breaks, it's a small change to roll back or, preferably, roll forward. "Nothing stops me from deleting all code just for giggles." You have that situation now. "How do you do reviews without feature branches?" Pairing. If not, you have very short-lived feature branches and eat the waste of wait time for code review. Reading your test environment situation, if I were on your team I would map the testing process including the work time and wait time for every step and re-engineer for faster feedback. If the build is broken, the team stops and fixes it immediately. "First of all, I would hate to start every day with solving conflicts." This is very confusing to me. Why would this be the case? You start off your day working from a new copy of origin master. Conflicts are exceedingly rare. I only get them when I've held onto code for too long before pushing.
@@BryanFinster @Tom Johnson So, in short, instead of branching and reviews, best practice is to do pair programming, which makes sense. Never done that to a serious extent :) As for deploying as soon as the tests pass: in my project, tests together with the whole environment setup take up to 4-5 hours, which means there would be a high chance of someone making a new commit in the meantime. This is why I like the idea of a release branch - you push code to it, then the pipeline takes care of everything else, if tests pass of course, but you can run them on the pull request, before merge, so the branch remains "clean" and working. As for nothing stopping me from deleting the code - to merge to the develop branch I need at least one approval from someone other than me and a passing build (on PRs to develop we run a shorter suite, around 30 minutes). Work tracking - well, our product has a 24/7 hotline for customers, we have on-call duties, and we need to track when and how we fixed things that came from the client, so PRs and "aggregation" of git blame are very helpful. The most difficult thing lately, when we moved from quarterly releases (yes, but we are making progress!) to CI/CD, is to keep track of which ticket was done/fixed in which version. We need to automate that. Last thing, the conflicts. Yes, I overreacted :) even with branching I rarely get to solve conflicts by hand (kDiff is really good!), so you can ignore this point. To sum up, the whole thing is much more than GitFlow vs trunk. It's a completely different approach on so many levels - pair programming vs PRs and reviews, staying on course vs tracking progress, having the develop branch in production vs having a release branch and distinct versions. I need to take a deeper dive into this; maybe we will run some pilot sprints (do you still have sprints, or does kanban work better?), because the more continuous our work is, the less I like gitflow, but this is just the opposite end of the spectrum. How mission-critical are your products? Do you feel like your methodology has an impact on stability?
We branch per ticket; branches often live only for a day to around 3. You said why branch if the work is so small - well, it makes code reviewing and rollback so much easier, as you don't need to work out which commits made up a task, and it only costs you about 5 seconds to make a new branch. You can then keep that branch up to date by merging any changes from origin master (or from other branches working in a similar area to you) into your working branch. Once the developer is happy, they can merge back into origin master or open a pull request, depending on your company setup, to be automatically deployed to the QA envs. I would say the flow you mentioned risks pushing to origin master too quickly and potentially committing breaking changes that you know aren't complete (as one should also commit often).
The first time I saw GitFlow, my reaction was: 'Guys, you cannot be serious! Why would you do such a complex thing that is not CI friendly?'. Then I saw a lot of people praising it, and I thought: 'OK, then it must be just me being blind, maybe they know how to practically run CI on zillions of branches'. Dave, thank you for explaining to me that I am not blind :).
@@ContinuousDelivery When you see people attempting to do CI with gitflow and have zillions of branches being built - that's when you know CI has gone through what Alan Kay called "the great low pass filter of life" ;-)
Interesting concepts. In the 1990s, using Perforce, we developed a 2-branch system for a large engineering system which was mission critical. We had about 60 developers at the time. A key assumption of the system was that "hotfixes" were absolutely banned: everything had to be a formal release. The developer is the least important actor in this business scenario; the customer is the most important. As such, when a developer hit 'submit' it was part of their professional responsibility that they had integrated and tested that software against 'head'. We had a visible webpage that displayed broken 'submits' as they occurred. A key part of this philosophy was a comprehensive software build system. Each developer could build the entire massive system from a blank disk in minutes. We had automated testing. So expecting a developer to test their change was not overly onerous. Having said this, there is very little about the Git environment that I like, compared with Perforce. Perforce, with its reservation system, is better suited to serious endeavors with critical software.
Wonderful content. I think the same, mostly. But when there are juniors on the team, sadly it might not be possible to go full CI, since even though a junior dev's code is working, it might need refactoring and review. And we are trying to do pair programming as well, but there are some limitations to that, like timezones etc. Outside of this I strongly agree with what you think.
Most of this I consider just contextless bias, but indeed forcing people to think about commits in the more incremental way (i.e. my commit cannot break the build) sounds pretty nice, I like it.
Interesting. What you call "contextless bias", I consider "common sense". I dismissed Gitflow as soon as I saw it because I thought about it for about 30 minutes and saw how unnecessarily complex it was and how it just compensates for issues elsewhere in the development process.
Continuous code review using ensemble working is a good option. You cannot inspect quality in afterwards anyway, so the best way to make sure you ship working software is to review while writing it, using two or more brains at the same time.
Nah, there are many testing tools that can be used with branching too, and the whole branching > pipelining > CI/CD cycle with only one branch is not selling it to me.
"Just keep a local copy of master and merge when you are done" is (manual) feature branching. With the added risk of lost work that you never pushed. Just like we developed softwares before we had VCS that was good at branching.
"Just keep a local copy of master and merge when you are done" does not mean "two weeks of work". It means something like committing/pushing every hour or so. That means committing incomplete/unfinished code, that still has to work, but that remains unused. It requires a different way of working, thinking, designing, testing, building, communicating.
If commits for still-incomplete features that might take weeks or months to develop are all randomly mixed in one sequence, how are you going to remove experimental features that don't make the cut when they consist of hundreds of commits spread over thousands of commits? How do you even make sure those removed features don't leave skeletons behind?
It's even worse. What if I have to fix a bug urgently on the current production version, but half of the next features, which don't fit into the current version, have already been committed?
You make a great argument if you have a live running product with only 1 release. If you have to maintain multiple releases, you need multiple branches.
There are other ways to handle this too. One approach is to keep one version that configures itself on start-up. The HP LaserJet team did this for all of their printer products when adopting Continuous Delivery. The result was a very dramatic reduction in the cost of maintenance.
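A toy sketch of the "one version that configures itself on start-up" idea from the reply above. The device models and capability table are entirely made up; the point is only that a single build decides its behaviour at runtime instead of living on separate branches.

    # Hypothetical capability table: one binary ships everywhere and decides
    # at start-up which features to activate for the device it finds itself on.
    CAPABILITIES = {
        "model_a": {"duplex": True,  "colour": False},
        "model_b": {"duplex": True,  "colour": True},
    }

    def configure(detected_model: str) -> dict:
        # Unknown models fall back to the most conservative configuration.
        return CAPABILITIES.get(detected_model, {"duplex": False, "colour": False})

    if __name__ == "__main__":
        features = configure("model_b")
        print(features)  # {'duplex': True, 'colour': True}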
You're misunderstanding (wilfully or ignorantly) that there are two processes being discussed; that there are commits in both is neither here nor there. Wait until you get into the whole rebase vs merge argument, that's gonna totally blow your minds...
@@marshalsea000 I meant there is no entity like a "branch" in git. It is just a pointer to a commit, for our convenience when working on the commit tree :) There is only a single tree in git.
The idea that we're testing an out of sync version by branching is a fallacy. When pull requests are created they contain the latest develop code merged in. Thus they're accurate at that time. If develop changes conflicts will be shown. Thus forcing you to resolve them. When a feature branch is merged to develop or has direct commits, CI is fired off. Thus the develop branch is always tested with all changes together. Nothing gets into main/master without being tested correctly. And in addition, since you're using branches you have the flexibility to decide when to release changes and have options like parent feature branches. With the CI approach you describe no code reviews are happening and you don't have the option to work in isolation. This is really bad for any significant application.
> With the CI approach you describe no code reviews are happening Why do you think code review should be coupled to merging a pull request? > and you don't have the option to work in isolation. WON'T FIX. Working as intended.
Great explanation. I looked at GitFlow once and decided I wasn’t ready for that sophistication. Now I see why it’s mutually exclusive to CD. At 10:00 you described your work flow. I’m not sure if it was just for simplicity or if I’m missing the bigger picture here but shouldn’t someone have reviewed your code before you merged it into origin master?
>shouldn’t someone have reviewed your code before you merged it into origin master? Yep. The guy sitting next to you in a pair or the guys sitting around you in a mob. Or, in today's pandemic terms, "sitting".
@@andrealaforgia So how does the person "sitting" next to me see the code that I'm writing? I mean I exchange snippets of code through Teams, often even as screenshots, but eventually I still have to share it somewhere so they can take a look at it. That place is usually a separate branch that does not mess with the production code. I see no desire to "pair program" on the branch that is the "correct version" of the code. There's a lot of talk in this channel about "features", but it is very rare for us to have "features" that take less than 1 day to develop, so having CI in this fashion makes no sense.
I wonder what our model is. We develop and test changes locally in a develop branch, and deploy that to a QA testing environment where it gets tested by a QA team. Once we fix all the defects the QA team finds, we create a "release candidate" branch that we deploy to a user acceptance testing (UAT) environment where it gets tested by key users. Development continues in the dev branch while UAT goes on. We fix defects that users find during UAT in the RC branch, and immediately merge those changes back into the dev branch. When UAT is over, we create a production branch and we deploy that to production where it gets used by all the users. If there is any break fix work in production, it gets fixed in the production branch, and merged back into the RC branch, and further back into dev. But we almost never do any break fix work in production. The problems in production have rarely been so bad that users can't wait until the next release for them to get fixed.
Some interesting ideas and insight in the video, but too dogmatic and clickbaity which is unnecessary. It also sort of lacks context. Not every client/customer arrangement includes spec changing / tweaking / testing on a daily basis. Having sprints after which feedback is collected and addressed is a viable approach. You don't _need_ daily feedback / micromanagement. Having to implement tweaks based on feedback also doesn't mean everything you did before all of a sudden becomes invalid and gets chucked out the window. You simply improve things incrementally.
How would this work with code reviews? Often feature branches offer the benefit of code reviews as well. I was thinking that could be possible by making release branches from master/main and reviewing then? *Sorry, I mean merging to a release branch so you see all the changes since the last production release together.
My preference is pair programming, so there is no separate "code review" step and so no need for PRs. You get a better code review than code-review alone and lots of other benefits with Pair programming.
@@ContinuousDelivery @Andrea Laforgia I'm repeating myself, from above, but it seems it's necessary: code reviews and pair programming are different things and not interchangeable. Working together with someone on the same code leads to a different perception of the result than for someone with fresh eyes. So even for teams in the same timezone, code reviews should be done asynchronously. That is: asynchronous code reviews have a different set of advantages/disadvantages and are not just borne out of necessity. While I think pair programming is highly desirable, it doesn't make code reviews expendable.
@@bitti1975 No one has ever said that, with pair programming, code reviews are expendable. Async code reviews are, though. It all depends on what you mean by code reviews. If you mean PRs, yes, they can be removed with pair programming. Code review is a fundamental activity that you make *continuous* with pair programming. You shift left the code vetting, decisions, and agreements you would normally perform in a PR. The best way to review code is to make decisions whilst working on it, not after. There is a mountain of evidence that proves that pair programming is highly effective, and mob programming is even better.
@@andrealaforgia I explicitly specified code reviews as an asynchronous activity. So yes, even you seem to think they are expendable. Saying "The best way to review code is to make decisions whilst working on it, not after" is just a postulation, but at least it means you acknowledge that there is a difference. Some things are easily overlooked in the heat of the moment, so while it is desirable to improve code as early as possible, some things can only be seen with a certain distance. I don't know why you had to reiterate that there is high value in pair programming, though, which I agreed to anyway.
That's an extreme view of CI. There needs to be SOME delay, on a consumer system anyhow. You can't just publish every crap without review. But you CAN also publish your develop branch, if you are brave. If you want your changes to work, just keep rebasing...
CD does not mean publish/release every commit! It means publish the latest version/build that is regarded as "good" at your choosing (by your context/definition). As the saying goes, "If you can't deploy right now, it isn't Continuous Delivery". Rebasing does not work if everyone's changes live in their own branches. You will end up with lots of changes that are unintegrated, since there is nothing to rebase with.
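A small sketch of "publish the latest build that is regarded as good": the build records and the passed_all_stages field are invented for illustration of the idea, not taken from any real pipeline.

    from typing import Optional

    # Hypothetical build records produced by a pipeline; only builds that
    # passed every stage are candidates for release.
    BUILDS = [
        {"id": 101, "passed_all_stages": True},
        {"id": 102, "passed_all_stages": False},
        {"id": 103, "passed_all_stages": True},
    ]

    def latest_releasable(builds: list) -> Optional[dict]:
        good = [b for b in builds if b["passed_all_stages"]]
        return max(good, key=lambda b: b["id"]) if good else None

    if __name__ == "__main__":
        # Deploying "right now" means picking this build, not build 102.
        print(latest_releasable(BUILDS))  # {'id': 103, 'passed_all_stages': True}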
We decided to skip develop and go directly to master literally in the previous sprint. After years of using develop to increase the confusion over the current state of the system, everyone finally seemed to have understood why it's a bit counterproductive in the long run. And yes, we will be having issues merging those big a** features that span multiple branches, because we're yet to incorporate feature flags, but we will manage, and I call it a win already. Thanks for the content again Dave. Great to see an a posteriori confirmation that what we did was indeed correct. And the fact that my dev team watches and actually enjoys these means you're really nailing it.
It's about fear. When you work with too large a team of inexperienced and unmotivated people, feature branches are a way to prevent their work from being merged back without a session of some kind of oversight committee. It's horrible, yes, but it kind of does serve a purpose in a peculiar manner.
Yeah, but this idea that a team is a group of people where there are untrusted members supervised by trusted members is really bad. It encourages ivory towers. A team should be a group of people working together: get the senior one to pair with the junior one so the latter soon becomes trusted. Teams are there to unite people, not to segregate them.
Yes, I understand that, my point is that it is a really bad response to the fear. It is a bit like being afraid of anything else: if you do less and less of the things that you are afraid of, your fear will only grow, and eventually you find yourself living in a cellar, eating cold baked beans from the can with a silver-foil hat on your head. Hiding from the fear is a poor response; instead you need to deal with it in some manner, with some care. The danger may be real, but hiding only makes things worse, not better. I have seen many companies that can't release software at all, despite having lots of people employed to do so. This is a result of retreating from the fear. The reality is that if we want to create software in teams, then we must allow people to make changes. The way to make them careful and cautious in making the changes is to make the consequences clear to them. You don't do that if you abdicate responsibility for the consequences to some small group of over-worked gatekeepers. The data is very clear: moving more slowly like this results in lower, not higher, quality software. (See the "Accelerate" book by Nicole Forsgren et al).
I disagree. Speeding up the feedback cycle is beneficial, I like that. What I disagree with is the idea that automated testing is the best form of feedback. First off, in an ideal world, you should be able to run a significant portion of, if not all of, your tests on a local development machine. In some large projects, especially with many microservices, this is difficult or impossible, and in those cases other solutions need to be found. But primarily, having feedback from other developers on your changes is more important, and in a CI setting you need extra tooling, which in my experience is often quite fallible and confusing, to be able to review the changes made for a single feature. Feature branches and pull requests give you a way to have feedback on your changes, and draft pull requests are criminally underused. In the end I think your arguments make sense on the assumption that CI/CD are the best way to do development, but I think that's a false assumption for many teams and projects.
@@andrealaforgia5066 Yes, contract testing is an important part of the solution for large-scale projects with many microservices, and is part of what I meant under the umbrella of "other methods". In those cases end-to-end style integration tests would still be hard or impossible to run on a local machine, and that's what I meant. Large systems aren't an excuse to test less, they're a reason to test more. Also, I don't equate peer review with pull requests, but I actually think asynchronous peer review, over pair or mob programming, is important. Multi-dev programming is a useful tool people should use, but it puts all the people involved into a similar mental space while building the code, following a single chain of thought, just with more minds making it more robust. Asynchronous code review, when done properly and not as a rubber stamp, can ensure that the changes make sense from both a "how" and a "why" perspective to someone who doesn't share the same thought process, and can act as a litmus test for how maintainable the code will be six months later when a change needs to be made and nobody remembers the original thought process.
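For readers unfamiliar with the term, here is a minimal sketch of what a consumer-side contract test can look like in Python. Everything in it (the `parse_user` function, the payload shape) is hypothetical; real setups usually drive this from a contract broker such as Pact, but the idea is simply that the consumer pins the parts of the provider's response it actually depends on, so it can be checked without a full end-to-end environment.

```python
# Minimal consumer-side contract check (hypothetical client code and payload).
# The consumer pins the fields it actually relies on, so a provider change
# that breaks this shape fails fast, without a full end-to-end environment.
from dataclasses import dataclass


@dataclass
class UserDTO:
    id: int
    email: str


def parse_user(payload: dict) -> UserDTO:
    """The consumer's view of the provider's /users/{id} response."""
    return UserDTO(id=payload["id"], email=payload["email"])


def test_user_payload_contract():
    # In a real setup this example payload would come from a contract
    # broker (e.g. Pact) rather than a hand-written dict.
    example_payload = {"id": 42, "email": "a@example.com", "extra": "ignored"}
    user = parse_user(example_payload)
    assert user.id == 42
    assert user.email == "a@example.com"
```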
If someone tells you that there's only one good or best way to properly build software, regardless of the project scope, project type, language used and team make up, be afraid! No one process is flexible enough to meet the demands of every possible implementation. It's almost like a certain channel owner is trying to sell books or a training course on a competing subject.
(CI, CD, and TBD have all been proven to predict (yes, "predict", not "correlate with") higher performance in software organizations, as per DORA and State of DevOps reports. You can learn more in the book Accelerate if this topic interests you. The book overviews the research methods and more.
Indeed. For example, we release every 6 months (used to be 1 year too) a new version. It's a windows desktop application, users have to install/upgrade manually by running setup.exe.
Just FYI, such things also still exist. (it's a financial business application with more than 20 ys codebase)
Imagine being this defensive about the way you do work lol... log off once in a while, buddy, you look petty otherwise
@@zzzzz2903 we build financial software. One of our products is a Windows desktop application. The teams that build it use CI/CD. They always know what state the executable is in, though they only release on a predetermined schedule. I don't know why you think there's a conflict there
@@joshbarghest7058 "Continuous deployment is a strategy in software development where code changes to an application are released automatically into the production environment."
--
So if you release every 6 months, what do you mean with CD?
Also, there is no "production environment". There's 600mb setup.exe. Based on our big customers update cycle (which is sometimes years!), they pick the latest setup.exe at that point, and upgrade to that.
Again, what is CD here?
You guys ALWAYS forget non-web applications. In case of these genres "continous" only means "as frequent just as possible". In embedded world, the most frequent release cycle can be even a month long. (Or, probably, there will be only 3-4 releases at all.) And we are not allowed to release, hm, not-too-stable stuff (I tried to re-phrase the word "crappy"), because it's not option to wait for the next release for fixing it, because a bug might be dangerous IRL.
I can relate to that - when the software you write controls a 500 horsepower machine and can kill 10 people in the blink of an eye…
@@stephanegeorget1715 Even a machine producing bad coffee until the next update is unacceptable, but yes, cars are the best examples.
You might only be able to release your embedded software once a month, but can't you still integrate branches within the repository daily?
@@barneylaurance1865 not always. For a period I worked on projects where we could hurt or kill our testers if we didn't take proper care. so for safety reasons we had an extra branch for test and before we did manual releases to the test branch we actually went through not only automated testing but also code reviews (manual and automated) before releasing. Yet, mistakes happened, though no-one was hurt as long as I worked there.
@@karsh001 OK, so you had to delay delivering the code to the testers to do code reviews for safety. I'm still not sure why you have to delay delivering it to your programming colleagues. I guess you work with an emulator or something so you don't injure yourself when you're writing it.
Instead of choosing what git strategy to use, its better to beef up the testing first... Whatever git strategy you use, it will be useless if you don't have proper and robust automated testing
Yes that's a given but you have to choose a version control strategy.
@@andrealaforgia5066 I believe the test suite is the premise for the whole CI thing to work in the very first place. You could blindly (without any test) commit to the trunk but then, when an issue occurs ("is discovered" would be more accurate), there's no way to tell which commit causes it. That takes time to investigate and make people doubt the CI approach. Sooner or later, they will switch to the feature-branch approach make sure issues are well managed/isolated which actually gives a false sense of security. Adopting CI is matter of choice but having a robust test suite is the matter of implementation.
Testing is much better aligned with git-flow than with a haphazard trunk-based approach. Git-flow naturally allows a dev branch to be properly tested during a sprint BEFORE it merges to master (our single source of truth) and BEFORE it gets released. Git-flow also helps to manage release notes.
Amen to that. I'm currently working in a company which does all testing by hand. You have no idea how many restless nights our tester (yes, one tester) has.
@@mikebell184 You are using the Horse-and-Buggy argument. "Our horses work just fine. Horses are better than cars because of XYZ."
Yes, Git-flow WAS amazing. It was great for its time. It's time to move on. No more develop, master, hotfix, whatever... It's time to have one source of truth. Whatever processes and steps you would use to test and catch bugs before you merge develop into master, apply those same exact processes and steps to each individual branch before it makes it into trunk, so that trunk at any moment is releasable. There's no ambiguity about whether trunk is ready or not.
I work in an environment where Continuous Integration is not feasible. Git Flow works exceptionally well for our team.
Curious to know about the environment
@@kishanbsh I don't exactly want to give specifics, but it's pretty highly regulated, meaning that every development needs quite a bit of design and approval from higher ups. We often work on developments that are quite large and can be rejected by senior people at the last minute. Removing integrated code is much harder than just integrating "manually", i.e. git merge, as soon as we get the green light.
Looks a lot like good old waterfall...
There are better ways to work, but not every industry adapts at the same pace.
I guess you can't release every day or week, but more likely every month or quarter, am I right?
Not every idea is feasible without changing your mindset.
Last minute changes? - Bad.
Rejected at the last minute because of senior people? Why weren't they there sooner? - Bad
From the description:
"What is GitFlow and why is it a bad idea if you want to practice Continuous Delivery or Continuous Integration?"
I like branching to isolate changes which "aren't ready" from everyone else's changes.
But I also like frequent rebasing, so that everyone else's changes aren't isolated from the branch.
i.e. the integration is continuous, but unidirectional. And this also encourages one to break changes into the smallest useful unit, as "being done" has a direct incentive: not needing to be the one who deals with that integration.
It's very similar to CI, but admits that some changes really do take more than a day, and that merge commits act as a useful label for grouping related changes together.
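For what it's worth, the "keep rebasing on the trunk" habit described above is easy to script. Below is a minimal sketch in Python; it assumes the trunk is `origin/master` and that the test suite runs with `pytest`, both of which are assumptions you'd adapt to your own setup.

```python
# Hypothetical helper: keep the current branch rebased on the latest trunk
# and re-run the tests, so integration stays "continuous but unidirectional"
# as described above. Assumes origin/master is the trunk and pytest runs
# the suite; adjust both to your project.
import subprocess
import sys


def run(*cmd: str) -> int:
    print("+", " ".join(cmd))
    return subprocess.call(cmd)


def main() -> int:
    if run("git", "fetch", "origin") != 0:
        return 1
    if run("git", "rebase", "origin/master") != 0:
        # Conflicts: resolve them now, while the divergence is still small.
        print("Rebase stopped on conflicts; resolve and re-run.")
        return 1
    # Re-test against the freshly integrated trunk changes.
    return run("pytest")


if __name__ == "__main__":
    sys.exit(main())
```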
Always get latest, deal with the fallout in your branch, squash and rebase on top. Nobody needs to see all the crap commits that went into building the delivery. Next argument will be "but I've got loads of individual parts"... Your PO is doing a crap job of managing the project and breaking things up wrong... I suspect you're using Jira, which teaches baaaad habits.
@@marshalsea000 I've got loads of individual parts, and I'll break them into the easiest-to-read commits. Squash the corrections into the original, but don't make me read about a change to the API at exactly the same time as the new method which justifies it. The justification belongs in the same PR as the change, but not in the same commit
@@________w sounds like you're dealing with a monolith.
@@marshalsea000 I tend to call any defined interface an "API". made-up example: needing to support a new type of authentication token, so commit 1: add a new "token type" parameter / ensure it is accepted; commit 2: add support for a second token type;
each commit can be read in isolation and makes sense on its own, but the first commit is only justified by the presence of the second commit, and the second commit requires the first commit as a prerequisite in order to be a non-breaking change.
But how do you assure that the code in your branch works? You are running all the tests (including Integration and End-to-End tests) on your machine each time you merge or rebase your local branch? How are you sure the code will work on some other machine before you merge your branch into main?
14:20 again with the claim that feature branches aren't being tested in integration with other changes? But it's perfectly possible to have the CI server merge changes from master before building - and notify you if changes can't be automatically merged. And yeah, that means you're not testing the integration with work on other feature branches - but as you said, that's the intention, to give other teams the time to refine their work. I'll keep following and listening, Dave - but there are still two unanswered questions for me with regards to trunk based development. One, how do we avoid wasting everyone's time with half baked code that needs more than a day to set? And two, how do we do code reviews in practice? These two issues compound, in the form of half baked, unreviewed code ending up in production daily. While that may be acceptable in some environments, it's another situation for teams working under legal oversight or with life critical software - are you really certain this is right for everyone? I'm still watching, but still don't feel like the central issues are being addressed. 🙂
Ensemble working and continuous code review are what you're looking for. You cannot inspect quality in, quality has to be built in. As for half-baked code I don't understand what you're talking about, why would anyone commit code that is not complete? You can hide partly developed features and changes behind feature flags for instance, if that's what you mean.
@@ottorask7676 "why would anyone commit code that is not complete" - because they don't want it to get lost, for example. "behind feature flags for instance" - but feature flags defeat the purpose of not having branches, which is "finding out that my code is wrong as soon as possible".
You think that "one day" is literal? fetch and rebase is the answer.
From my current understanding of the topic, the most important part of the trunk-based approach is to have tests. Not hollow unit tests, but a complete test suite composed of different kinds of tests: unit tests, integration tests, functional tests. A test suite where, when you see GREEN, you know that this is production ready. Every single breach in production should be treated seriously to enhance the test suite.
So for any commit, either we get a green on the test suite, or we rollback the commit. Then we don't really need to do the code review on every commit, and this could be a review/improvement process even after the code is committed, not a safeguard check point. As long as we treat every kind of breach seriously, code review shouldn't be an issue.
As for half-baked code, for every new feature there will be multiple commits until the feature is usable. Though, as long as those commits are not breaking the current system (passing the tests), that should be fine. The feature could be hidden until it's complete, but we're still able to test the new feature together with the current system. So at any point in time while developing the new feature, we know that the partial feature still works well with the current system without enabling it for end users. And you don't have to release every commit to production on a daily basis. Still, with the CI approach, there might not be a clean cut where we could find a commit with no half-baked feature to release to prod. That's exactly when the test suite gives us the confidence to release to production.
If everything works in tandem like this, there shouldn't be any issue applying this approach. Then, it's crucial to make sure everything works in tandem.
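A rough sketch of the "either the suite is green or the commit gets rolled back" rule described above, assuming the suite runs with `pytest` and that reverting the last commit is an acceptable remedy; in practice this check usually lives in the CI pipeline rather than in a local script.

```python
# Sketch of "green or roll back": run the suite after a commit lands; if it
# goes red, revert that commit rather than leave the trunk unreleasable.
# Assumes pytest and git; a real pipeline would also notify the author.
import subprocess
import sys


def main() -> int:
    if subprocess.run(["pytest"]).returncode == 0:
        print("Suite is green: the commit stays.")
        return 0
    print("Suite is red: reverting the last commit to keep trunk releasable.")
    return subprocess.call(["git", "revert", "--no-edit", "HEAD"])


if __name__ == "__main__":
    sys.exit(main())
```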
Feature flags and tests that test both “feature on” / “feature off” states. Develop & merge frequently
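To make the "test both states" point concrete, here is a minimal sketch with a made-up `new_checkout_enabled` flag; the names and the wiring are hypothetical, the point is only that both code paths stay under test until the old one is deleted.

```python
# Minimal feature-flag sketch (flag name and functions are made up).
# The half-built path lives on trunk, switched off by default, and the
# tests exercise both states so neither path rots.
def legacy_checkout(total: float) -> str:
    return f"legacy:{total:.2f}"


def new_checkout(total: float) -> str:
    return f"new:{total:.2f}"


def checkout(total: float, flags: dict) -> str:
    if flags.get("new_checkout_enabled", False):
        return new_checkout(total)
    return legacy_checkout(total)


def test_checkout_with_flag_off():
    assert checkout(10.0, {}) == "legacy:10.00"


def test_checkout_with_flag_on():
    assert checkout(10.0, {"new_checkout_enabled": True}) == "new:10.00"
```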
Very strong disagree with not having branches imho. Having to work within the CI workflow is extremely annoying when developing entirely new modules to a repository with few dependencies and that no one actually uses yet. In those cases, you very definitely do want to have a feature branch. For making small changes to an existing module, this is less of an issue.
Problem is, many companies don't use CI/CD. For Pete's sake, many companies don't even test their code before committing ("there's no time to write tests", "it will take too much effort/time/assets etc., maybe in the future", they say). So we have to stick with feature branching, merging regularly, and praying that no one breaks the master branch. Sadly.
There's no time to NOT write tests! Failing to write tests is an extremely selfish act, forcing your technical debt onto the shoulders of your successors. Don't let your name become a curse word because you *will* live on in infamy in the commit logs.
This is us, and at the moment I hold that opinion. We are 3 people handling multiple bigger apps: one standalone and, at the moment, 3 built out of an in-house framework which share 80-plus packages and are modular. We built it ourselves and at the planning stage 5 years ago also decided that we cannot afford it.
Would you try to convince me here? I am very curious about that, especially as the newest of us now writes tests for his stuff.
been there... left quickly...
@@Chiramisudo that's an invalid point.
Have you ever taken a loan? A mortgage?
Tech debt is a very similar thing. You get what you want now, and pay for it later. And pay more.
Why would you want to pay more? Because you made a deliberate choice: having the thing now is more important than some extra money in the future.
So having tech debt might be a very reasonable thing. But you must control it. The same way that extreme monthly payments on all your loans will crush your budget.
@@krivdaa9627 A poor analogy. With a mortgage, it is YOU who is responsible for paying the debt, not your successors.
The ONLY justification, in my mind, is when the company will literally go bankrupt and cease to exist in its current form because it failed to deliver a product before running out of funds. Maintainable (readable, testable, etc.) code is THAT important.
What about a situation where none of the methods seems to work well: you need to make a fundamental architectural change to your code. Maybe some central module in the code requires a completely different approach. Refactoring would take 10x the time compared to simply rewriting it. Refactoring can be done in small steps but would be extremely slow in this case. A complete redesign and rewrite would be the much faster way, but you would need to touch lots of areas in the whole codebase to make the change, and you can't commit the changes before every part of the code has been changed to use the new module. Thus it sounds like a "one man job" while others aren't allowed to touch the code base at all. A tricky situation. Any suggestions for times like that?
If you are talking about making incompatible changes to the public API along the way, you are better off making a new repo.
Why are you claiming that refactoring would be extremely slow? Being able to make changes in small, completely working steps is ideal. You can quickly integrate each of the changes, and move on to the next one with confidence. Is the hold-up really the refactoring, or is the hold-up a slow release cycle that is throttling your integration to one step every couple of weeks?
Doing a complete redesign is almost always actually slower. People who claim it is "faster" to do a high risk rewrite are usually just counting the time to write the first draft. The cost of a change isn't just the time to draft the new code, but to test it, and go through all of the debugging cycles to fix the regression issues.
@@tube4Thabor so would your commit message be something like "Refactoring _____ WIP" if you couldn't finish a particular refactoring on that day?
@@thatoneuser8600 The hypothetical we are working under said the refactoring could be done in pieces. So the commit message should state which piece you actually did and why..
The problem with a massive change in one hit is that it is almost impossible for people to effectively review; the review cycle alone may span weeks, by which time the branch is stale and you probably need to fix conflicts.....and that's when the bugs creep in.
Much better to split the change into smaller tasks which the rest of the team can keep up with. It's fair to say that it *is* more work overall, but the end result is more likely to be better quality.
I've been there and done it both ways. For me, velocity trumps everything; stale branches are the enemy.
This would create a chaotic trunk history. Where you would rebase on a feature branch to simplify history, doing so on the common trunk is nearly impossible with a distributed team.
It also makes pushing code to the origin more of a headache. I recently had to work in an environment where the machine hosting the local repository was unreliable, and pushing to the central repository was the only way to back up. Using your approach would have meant that I would need to push incomplete versions of my features to the trunk just for a backup (or create a temporary feature backup branch, which seems antithetical).
Lastly, your whole framework seems to be heavily reliant on timeline. It might make sense for a one-day feature, but if that feature suddenly grows into a multiple day task, then you have to worry about finishing quickly just so you don't start lagging behind the current truth (whereas with a feature branch I would simply pull master and rebase my feature branch to refresh the lag from the truth I was working with). Eventually you might give in and truly make it a feature branch, then revert your master and pull the latest before reading the new feature branch, if you deem it is out of scope for a single task. This is an arbitrary, self-imposed limitation that almost acts as a punishment to estimation errors (which are prevalent).
I think simplicity has its appeals, but ultimately trying to conform to some theoretical goals while ignoring the practicality leads to issues like those I mentioned. Git-flow has issues, but dev teams should use it as an inspiration for a workflow that better suits their needs. To me, focusing on improving testing, beefing up CI and deployment robustness are all more interesting than striving to adhere to some theoretical metrics.
I think his mentioned approach only works when the team members are evenly competent. Whatever the team size, if you have a couple of intern devs in your team, things could go wrong soon.
I worked on a project with around 50 devs across many countries and competency levels (some from short-term outsourcing companies); if we had used his approach it would have been a nightmare too.
My 2 cents: frameworks and libraries are mere tools to developers. We use them the way we see fit to get the best out of them in particular use cases. We are their masters, not their slaves.
That's not even the worst-case scenario! The worst is "Dude, please __unmerge__ / counter-commit all your intermediate commits from the common trunk - your feature is getting postponed for some reason for several months".
The second shitpile is "how to review the code?". The feature-branches model has a perfectly good answer to that: reviews are done on pull requests.
... and if (when) you want to integrate your code, just rebase on master and run tests. When you are ready to be included in a release, rebase the feature on the release and run all required tests.
The guy simply makes HIS work easy at the cost of introducing hell-on-wheels to the dev side.
@@hungluu902 some frameworks and libs become outdated soon, so we came up with this ridiculous CI workflow. Enjoy.
@@krivdaa9627 you are not getting the point. If you develop the same way you are developing today, yes, it is going to be a mess. In continuous deployment you separate deployment from release, e.g. by branch by abstraction or feature toggling. You have to use different techniques, but you gain a lot. Open up your mind and try. I would never want to go back to feature branch hell, stop the world releases, ...
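For anyone unfamiliar with "branch by abstraction": the rough idea is that the rewrite happens on the trunk itself, behind an interface, instead of on a long-lived branch. A minimal sketch, with all names invented for illustration:

```python
# Branch by abstraction, sketched: the old and the new implementation both
# live on trunk behind one seam, and a toggle decides which one runs.
# All names here are invented; the pattern is the point.
from abc import ABC, abstractmethod


class PriceCalculator(ABC):
    @abstractmethod
    def price(self, sku: str) -> float: ...


class LegacyPriceCalculator(PriceCalculator):
    def price(self, sku: str) -> float:
        return 100.0  # stand-in for the old, battle-tested logic


class NewPriceCalculator(PriceCalculator):
    def price(self, sku: str) -> float:
        return 95.0  # stand-in for the rewrite, built incrementally on trunk


def make_calculator(use_new_engine: bool) -> PriceCalculator:
    return NewPriceCalculator() if use_new_engine else LegacyPriceCalculator()


# Deployment ships both implementations; "release" is the moment the toggle
# flips to True, which is a config change rather than a merge.
calculator = make_calculator(use_new_engine=False)
```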
What if some of your tests are bad (sh... happens) but a huge logic bug is hidden, and was hidden for two months, and it is so complex that it can't be fixed in a day or even in a month in the current version, because all of the latest features were based on the part of the code infested by that bug?
Users have noticed that bug and you just have to rollback production to one of the previous releases where it can be fixed in minutes. How to deal with this situation without branching? Maybe this CI/CD flow is based on the "Happy path" assumption?
Thanks for the video. Quick Q: How do you handle Code Reviews?
In my experience with after-the-fact reviews teammates tend to forget or get lazy. With feature branches you can add checks to enforce reviews in pull requests. I'm not talking about catching failing builds, but rather knowledge transfer for new devs or mentoring for juniors.
Thank you.
Same question.
In short: pair programming and changing pairs on a day-to-day basis, even for the same tasks.
but you're still doing pull requests in this model if I get it right, aren't you?
@@ApodyktycznyCzlek depends; the goal is to remove the need for merge requests, because when you collaborate with half of the team on a given task the merge requests become unnecessary - almost all the things caught in review would be corrected while pair programming. Of course you won't get there overnight, but when you start to notice fewer and fewer comments in code review, then you can start shifting to pair programming only, without code review (or with code review on demand).
@@arch126 I find constant pair programming tedious and inefficient and prefer pull requests, as they're much more async.
Gitflow, my old nemesis.
I think that I have had more discussions, in my past jobs, about interpreting how Gitflow should operate than about the Gitflow projects themselves.
It's really just a waterfall-based project management tool (in my mind) which makes it a bad tool for CI/CD anyways.
Good discussion!
Cheers,
Oh boy, you really didn't understand how it works, did you? Otherwise you would know you talk bullshit. GitFlow is great, and it is agile. Feature flags are cancer, and trunk development is something we were doing 25 years ago. And Subversion was perfectly fine with that. Do not speak about stuff you don't know.
I used Martin Fowler's excellent guide to branching patterns some 20 years ago to set up software development processes using RCS, PRCS and then later SVN. The branching patterns you use follow the kind of software you develop and the way you want to organize your team, so there is no "one solution fits all" (as so often).
Can you specify book / publication? I know the author but don't know this one.
Hmmm, Normally I watch these videos and nod along as the suggestions/ideas match my own experience or don't seem particularly contentious, but this is the first time in a while you have really given me something to chew on.
I had a pretty visceral defensive reaction against this one and I think I have to go and figure out why
and revisit my assumptions.
Thanks for keeping things interesting :)
I was also nodding along up to the point where 'bad automated testing' was the reason for needing the production branch. We keep one 'main' branch (aka. master, dev) that has passed automated unit and integration tests. We have customer acceptance and regional regulatory compliance requirements that must pass before dumping our changes out the door, however. Maybe that means we can never do CI/CD?
It's also not clear to me how you'd easily conduct A-B tests; maybe project fork and parallel project that deploys behind a load balancer? I'm sure we could make things complicated enough to solve any problems.
I feel like GitLab Flow is a closer fit for our workflow, but I should probably revisit my opinions and assumptions, too.
So far I have enjoyed reading the comments. This channel has been attracting knowledgeable people. I wonder if there is a Discord channel where people could extend some of the discussions started here.
Literally this comes a month after I suggest that in replacing our old versioning approach with Git, that we should work off of trunk alone... an idea which was thoroughly shot down in favour of the GitFlow approach. At least we're now only a decade behind the standard, instead of 2 decades behind...
🤣
Just release more often. Soon enough the master branch will be exactly the same as develop. That way you can bypass the entire discussion.
Make a fake email account and sneakily spam your workmates emails anonymously with a link of this video until they come around
😂
Gitflow is best when someone needs to make a complex change. But - and this is what Dave leaves out - gitflow requires someone (team tech lead usually) to be aware of what people are doing. That's often left out of the discussion, but it is the most important thing of all.
10:37 Committing to local master then finally push to central master VS using a branch then finally merge to local then push to central master: they are the same, the result (central master) is identical.
With Git, branching is extremely cheap. If I use SVN, I never make a branch. With Git, I never think before branching. Inserting some temporary debug logs into the code? Make a short-term branch for it! Passing some half-baked-but-working stuff to a colleague for a demo? Fork a disposable branch!
The point is not the final result.
@@andrealaforgia When I say "result" I mean the whole process. Local changes VS private branches is only a "technical difference" (local changes are dangerous, the changes will be stored on the local machine only, not in the central repository)
Git is for people with anxiety. Branching is cheap but the mental context and maintaining where you are is not. You are really giving yourself more work in the end, in the name of feeling "safe"
If you push your commits to master as soon as tests are green where (or when) is the place for code review?
YOLO-oriented development
The inability to review and reject changes to develop BEFORE they get there alone is enough reason that this whole TBD approach should be a nonstarter for almost any non-trivial project. This guy is just so upset that SVN fell out of favor that he is trying to get everyone to use git like it is SVN lol.
I've done this to nearly all my projects without knowing about this, simply because it made sense. It's nice to know that I'm not the only one who came up with this idea.
If you're the only one working on a project why would you even need to branch?
@@RayZde Stability and not forcing your customers to have to use one specific version. I've had too many software products where one version won't work, but the previous version and next version do. I've also used a bunch of software where they provide free bug fixes for the life of the current major release, but they charge for major updates. It seems rather difficult to do both of those things if you're not branching.
I realize that it's fashionable these days to not know the difference between major, minor and bug fix releases, but it is rather important if you can't guarantee that everybody is going to update to a newer version, or you're charging for major updates. Sometimes a major update means that the hardware that worked for the previous version just can't be supported, but you can't/don't want to leave that software unpatched because there's still significant numbers of people using it.
Your comments at 14:42 resonated with me. I've, thus far, stopped short of CI and instead used small frequently merged feature branches, but you've convinced me to try proper CI. Thank you.
I am pleased to be of service, I hope you enjoy it.
We've used a variation of gitflow when multiple concurrent versions (sometimes major) of the software need to be maintained. Nowadays in those scenarios, when it's really necessary to maintain multiple versions, I suppose I'd recommend multiple CI branches.
@@andrealaforgia5066 It's not. When you have a Product that has several versions used by customers at the same time, you need several CI pipelines. Consider the Spring Framework project: it has to maintain versions 5.x.x and 4.x.x (and maybe some more minor versions), and they currently work on 6.x.x. They certainly have several CI pipelines: one for every release branch that is alive and one for the mainline.
However, when you have a Project or a Product that is served as a Service (i.e. you do not ship your product to multiple customers), and when you maintain just one single version with CI/CD pipeline, then it is different, you need just one CI/CD pipeline.
@@andrealaforgia5066 Under release branch I mean a live legacy branch of a Product where there are still customers using it. If critical bugs or security issues are discovered in such a branch, they need to be fixed. And a CI pipeline is needed for such a branch to verify that a bug fix or security fix does not break anything. So you need a CI pipeline for every live release branch (that still has customers using it). Of course, you can delay branch creation until a bug is found in it, and create a branch from a tag when the bug is found. But once the branch is created to fix a bug, you need a CI pipeline attached to it. Verifying that a bug fix did not break anything on a developer workstation is a little scary for medium to large systems.
@@miletacekovic >Of course, you can delay branch creation till bug is found in it, and create a branch from tag when the bug is found. But once branch is created to fix a bug, you need CI pipeline attached to it.
It's not a CI pipeline, it's a build pipeline. It's different. CI means something specific: Continuous Integration. You don't do Continuous Integration on the release branches, you keep them for hotfixes. In general, however, keeping a release branch for every customer, assuming that you have hundreds of customers, is suicidal, a good recipe for disaster. You cannot really expect to have to hotfix a bug on hundreds of branches. You will need to make those customers converge onto a new release at some point.
>Verifying if a bug fix did not break anything on developer workstation is little scary for medium to large systems .
What developer workstation? Who was ever talking about developer workstations? Developers' workstations are temporary workbenches. CI is about integrating developers' work into a shared mainline multiple times a day. Tests run on the mainline.
@@andrealaforgia OK, you agreed you need a build pipeline on the release branch (fine, call it a build pipeline, as tens of developers are probably not fixing bugs on a single release branch, sure). But that build pipeline is basically the same as the CI pipeline on the mainline; it cannot be different. It has to contain the very same tests as the CI pipeline attached to the mainline (including unit/integration/e2e/performance/contract/whatever you have), otherwise we cannot be sure that nothing is broken by a bug fix. Furthermore, this build pipeline has to run on CI infrastructure, not on a developer workstation. So everything here is the same as in the CI pipeline on the mainline, except that it runs on code from the release branch, so at the end of the day, calling it something different is maybe not justified.
> You cannot really expect to have to hotfix a bug on hundreds of branches.
Sure not hundreds, but dozen of live release branches on a successful product is not uncommon.
> Tests run on the mainline.
No, tests run everywhere: developer workstations, CI pipeline on mainline and of course on pipelines on every live release branch.
@@miletacekovic You are not doing continuous integration on the release branch. Therefore you cannot call the build for that release branch a "CI pipeline". You are fixing bugs on that release branch, you are not continuously integrating new development. That bug-fixing activity causes frustration among your developers, rest assured, given that they have to apply the same fixes in multiple places, with all the problems that that practice entails. If you have several bugs, discovered across multiple clients' versions, you need to multiply that bug-fixing activity across all those branches, increasing frustration and the fear of mistakes. The idea that you can keep release branches open indefinitely is not a sustainable model. It doesn't really work anywhere. You will need, at some point, to make your release branch converge into master again, or you are doomed to eternal sadness.
Stop calling it "CI pipeline". CI happens *ONLY* on the shared mainline of development, nowhere else. You are talking about separate builds that happen on the CI server. It's not a "CI pipeline".
If developers work on a local copy of master and push their changes directly to origin/master, at what point does peer code review happen?
If you need pre-merge code reviews, branch off master and PR to master.
@@gigas10 Dave refers to that as “feature branching” and instructs us not to do that, though
@@Jheaff1 The branch here is just a technicality though. If you do reviews, they will inevitably cause some additional delay.
@@soppaism Exactly. Is Dave suggesting we don’t do code reviews?
@@Jheaff1 It compares to situation where changes would be kept in the local copy just a bit longer before pushing to origin. Probably not a big deal if reviews roll smoothly in the team. Can definitely be an issue, if not...
Thanks for sharing your valuable insights. I think fundamentally, a highly experienced team would have no issues with adopting such practices. When adding less experienced engineers to the mix, with a lack of available senior engineers, things can go horribly wrong. If the code requires some refactoring, it's gonna hurt.
Would love to see some real-life examples to complement your insights... That would go a long way. Happy to discuss further.
And so it continues: the more we try to make agile work in real life, the more we discover it is just waterfall in smaller steps.
Not really, some teams practice agile that way, but the best teams that I have seen don't. Even at the detail level the approach is collaborative and iterative. For example, on the teams that I worked on, POs would sit with the devs and would see the software evolve as it was developed, if at any point we had a question about the requirements, or they didn't like the direction we were taking, we'd talk about it. Testers tested the software while it was being developed, not after development was finished. So really not anything like, even a mini, waterfall at all.
@@ContinuousDelivery Yes, my original comment was facetious on purpose. However, it looks like you are missing the bigger picture here: all the tools that we have in development - CI, CD, unit testing, agile, XP, V, etc. - are not about methodology in principle. They are about automating or at least formalising communication and responsibilities, from the realisation that any work done has dependencies on previous work, and all this should be mapped out into a workflow; otherwise you are just hacking around, which is nothing to be ashamed of, just all parties need to be aware of that. The tools we have help with workflow; which tools we use depends on the particular task and its environment at hand. Doing trunk-based CI/CD development when you are creating a prototype to confirm viability is wasting resources. Doing gitflow if you don't need to maintain multiple stable releases is also a waste. Doing agile if you have limited access to the project owners (which must include the end user) but are still held accountable to a timeline is a recipe for failure. Not being able to comprehend the overall picture but regardless advocating for a specific methodology is rather naive. I am not saying that you are doing that, just that I can't observe any evidence that excludes it. Having said all that, I do enjoy watching your videos and on multiple occasions they have given me the inspiration to think more deeply about what I am doing.
what is a waterfall?
The way you're describing your process seems awesome; however, it would require a test suite that is reliable, deterministic, and fully local. If you have to wait for a set of tests to run on a Jenkins machine, then you have to wait too long, and figure out who broke the build. Since you can't unit test everything (sometimes you need integration tests), how do you solve that hurdle?
The answer is: mix and match. Have multiple test suites: one of which is fast and covers as much as possible which can be run before push, and then put slower tests in a CI server like Jenkins. Those tests do involve waiting, and sometimes you do need to figure out who broke the build, but it's much rarer, and a worthwhile tradeoff.
Where possible, when you start getting classes of failure in the slow tests, try to find a way to surface them in the fast tests instead. Over time the compromise becomes less of a compromise :)
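One common way to wire up that mix-and-match, assuming pytest is the test runner: mark the slow tests and filter by marker, so the pre-push run stays fast and the full run happens on the CI server.

```python
# Sketch of a fast/slow split using pytest markers.
#   Pre-push (fast):  pytest -m "not slow"
#   CI server (full): pytest
# Register the marker in pytest.ini so it doesn't warn:
#   [pytest]
#   markers = slow: tests that hit real infrastructure or take a long time
import pytest


def add(a: int, b: int) -> int:
    return a + b


def test_add_fast():
    # Pure in-memory unit test: part of the pre-push suite.
    assert add(2, 2) == 4


@pytest.mark.slow
def test_add_in_integration_context():
    # Placeholder for a test that needs external services; excluded
    # locally via the marker, run in full on the CI server.
    assert add(2, 2) == 4
```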
CI encourages fast feedback; unit testing should be able to give you 80%+ confidence that everything is OK. You really shouldn't rely too heavily on integration testing as it's more complex, less reliable, less helpful and too late for things that an IDE or unit test would catch. I like only smoke tests to check basic connectivity e2e. Unit testing against expected consumer and producer contracts is better IMO. Broken contracts are a management rather than a development issue.
@@matthewlothian5865 I've seen enough silly changes to (all kinds of) tests made by developers to have learned not to trust tests to reveal issues from other developers
In my experience, having successful CI is 100% dependent on having a reliable test suite that the team is committed to using and maintaining. If you don't have this yet, I would recommend focusing on your test suite first and CI second.
@@defeqel6537 Whatever approach you pick, if the developers either don't understand it or don't cope with it, they will break it. I think what you are saying is that if you are working with bad developers, you need to make them better. There is no process or technical fix that will correct this; this is a cultural change. You don't get to build good software with bad developers, so make the developers better, whatever that takes. I am trying to do that by explaining the techniques that the best dev teams use.
(P.S. by "bad developer" I mean people who don't do a good job, not "bad people", in my experience it is easy, or at least possible, to help "bad developers" do better).
Committing to master several times and ensuring that each commit is stable sounds easy enough to execute; then making a pull/merge request (squash commits?) to make a single commit on origin/master seems like a reasonable approach. But I'm afraid that this seems viable just for solo developers; after all, Git was created for working with many people at the same time.
I'm afraid that the lack of branches will produce a chaotic git log, and probably will make working with many people a nightmare. How do you ensure that all people involved in development have a high sense of discipline to keep their changes not just releasable, but stable on every single commit?
This doesn't seem like an easy change to make in a large project with many people involved.
Try it and see.
I've found it really useful and it's the approach the team I work on take. It's just less context switching and messing and we can simply look at the repo to see the latest code.
I worked with 30 devs across Singapore, the UK, and the Western US - all sharing the same big codebase. We managed to work together closely and in all of four years we hardly ever needed to use a branch. We all shared the same codebase - with common ownership (i.e. anybody can change anything.) No need to "fear" anything - you just have to learn the XP way of working in a co-ordinated fashion. Branches are no substitute for working closely with other people. Now I know lots of people fear doing that and don't want to face the possibility of it - but it does actually work.
My "nightmare" is not being able to get rapid feedback about things on separate branches working together - that totally kills my ability to refactor and simplify things. The code becomes very, very hard to change- and very quickly. Working with my 30 colleagues on a trunk is a lot easier because catching up quickly with changes - and learning to make small commits makes it much easier to refactor complexity away.
💯😎
@@dafyddrees2287 Were you all working behind feature flags? How do you release feature #1 but not feature #2 when you have everything in master? Also how do you hotfix production when production is not in its own branch?
@@Keilnoth Feature flags are a bit of a worst case scenario because we don’t want a combinatorial explosion of switches undermining the usefulness of tests on a CI server. The trick is to build things in the order you want to release them and release very often. We did branch for hotfixes - at that point you are maintaining two different versions of the app anyway. We almost never needed a hot fix though - it happened very rarely, like once a year.
I think more than anything, branches and git-flow are more crucial to the project management side of things, I found that having branches with names that may correspond to a JIRA ticket code for example is very practical and easier to audit for a PM or Team Lead for example. It's always about what is practical for your organisation.
Your point is great, but I'll dare to push it even further: feature branches allow you to make pull requests! And THEY are practically crucial for management.
I have to ask: when is the "CI result" supposed to hit the end user/system? What system is there out there where the software gets updated many times a day? I don't know one "end user" of software that gets all updates the moment they are made and deemed safe and deployable. So, if we can agree that there has to be a split, where deployment of updates happens at different, less frequent times than CI (like at a minimum every several days), then we can understand Gitflow better and how it can work with CI. This release cycle is where the bundle of updates that were "CIed" is pushed out to the "users". In gitflow, that is the move from dev to master. So, CI happens in dev. Dev is the "current version". Master is the version pushed to end users (and thus almost always behind the current version). So, to me, Gitflow makes perfect sense with CI too, where dev is the CI'ed branch.
The other thing I am missing is the "mistakes" that might be made. Sure, the end use of the program is the feedback, but again, you can't afford to have users continuously stopping their business work to test the changes in production. Usually, you'd have a stage set up where you'd ask them to test. Usually it is in sync with the dev branch. Or there might even be a QA branch. Branches are hiding changes. They are copies. And they can easily be updated to match dev (which is a common practice too).
So, I'm not buying this. I think CI straight into end-user systems never happens, or rather is a rare animal, thus the premise of the discussion is wrong. I don't get daily updates on my Windows machine. I don't get daily updates of the OSS software I use. And I don't get daily updates of my cell phone's OS. Etc., etc.
IMO branches are not great - more than once I've found myself fixing issues caused by people forgetting to update one or the other branch!
@@1oglop1 Are you using feature branches that depend on other feature branches? The way I understand feature branches is that new features are based on master and feature branch is merged into master as soon as it's considered final code (note that the feature may not be complete but the code that far is considered good enough to take responsibility).
This was well presented. I do, however, notice that anti-gitflow and pro-trunk discussions often give very little treatment to variations in developer quality and experience and how to deal with them humanely. Also, requirements from other departments and customer-driven priorities (i.e. bugs and pilot features) are seldom linear in nature or time, in contrast to the commit log in git.
No amount of software or automation can adequately replace team members actually communicating with each other. So CI/CD, in my view, can never be the silver bullet to solve all dev issues. Process is more important than software. While gitflow has its faults, and has reached its sell-by date, it was a godsend a decade ago when most teams were still battling to understand git itself, let alone how to actually manage code with it. One thing I do completely agree with, though, are the statements about feedback cycle and its importance. But that was true even before the advent of gitflow.
The software industry isn't settled at all. Typically 50% of devs have less than 5 years of experience. So I think your point is still head on.
I have an issue with trunk based workflow. How to collaborate on more exploratory and larger features with multiple people, while development on main trunk goes on. I would branch from branches, merge commits from other branches, and visualize it all in branches. When it comes time to merge our feature, we can boil it down to a few self contained and more-easily reviewed commits.
Thankyou so much. It's really useful to have a place I can refer people that mandate gitflow. (I had to revise this comment several times to remove swearing.)
I just do what makes sense for each individual project.
I generally have two branches: in-progress and stable. In-progress for things that aren't ready for release, and stable for things that can be shipped to the user.
But every project is different - different scale, different team, different goal. And what works for your project will probably need more thought than a 15 minute video can determine by itself.
Why don't you have just one branch and just mark stable code with a release or version tag?
@@harleyquinn8202 Because a lot of the time, people (like myself) download and compile the source code directly from the repo expecting it to work, and if it's not at a point where it's fully functional or even compiles, that's pretty disappointing.
Anyone who uses Arch Linux is familiar with git packages, where installing an application or library does exactly that; download and compile it locally before installing it, rather than using a pre-compiled binary.
it sounds good in theory, but it's not easy in practice (ie. juniors, unmotivated people, culture issues). I like github flow with a ci/cd spirit. use a branch to write your code, but it's encouraged to merge 'incomplete' features as soon as possible... at least it gives you the right mindset when it works well, and it naturally falls back to traditional github flow when it doesn't!
It is astounding how so much of the history of software engineering is focused around **re-discovery of the past** in the sense that things that were simple once but got murdered by senseless addition of useless complexity, are now being revisited and reconsidered as the best way of doing things, but with some reticence, mostly towards seeming... "old" or... "conservative". I call that BS. It's just ego and closemindedness. Probably mostly enforced by corporations... Thankfully programmers are generally a smart bunch and will eventually find the best solution, and channels like Continuous Delivery do help a lot to fast forward that evolution.
I'm using gitflow in the current project. For some reason (you know, legacy, no tests, etc.), we can't switch to the proposed method (and CI in general) yet, but we're aiming to. And I have to say that gitflow is great compared to the lack of any process, where everyone was merging something and on "release day" the features we needed were cherry-picked to production, with constant reverts because of bugs. After introducing gitflow (although not perfect) we can finally take a breath. So I agree with everything you said except the title. It's not ALWAYS a bad idea; sometimes it's a step forward.
You'd be surprised how easy it is to shift to TBD from that state. The code has been tested in production. You don't need high test coverage of the existing system to switch. All you need to do is have good testing for every change going forward. You commit to "we will never push untested code again!"
When I've helped development teams in this situation, we've been able to transition their legacy code in weeks.
@@BryanFinster I also transformed one of the projects as you said, however, this one is quite unique. It'd sound strange but we just can't test some of the changes automatically and be sure that they'll work as expected, even on testing environments etc. On the other hand - the system handles thousands of requests per second and in the current state releasing changes multiple times per day is quite expensive. All I wanted to say above is that gitflow is not bad. There are many things to improve in my case and this way of working is one of the less important to change I believe.
@@comodsuda what we found was that solving for this required improving many other things that improved the overall ability to deliver. It acts as a constructive constraint to uncover problems we are numb to.
I empathize with the legacy issue. There’s quite a bit more involved than “just don’t branch” when you’re dealing with a multi-team 25 million line monolith made up of 2 decades of untested code. We decided to methodically re-architect to improve our ability to deliver. It takes time, but there is payoff for the org and the teams.
@@BryanFinster I'm glad you managed to do that with your team. I think we have totally different contexts :)
@@comodsuda Everyone does. It was a bit bigger than a team though. :)
I don't buy "everyone do this" narratives. TBD is a good practice, but it is not a universal "everyone do this" practice. Open source projects and many internal teams use gitflow very effectively. It often is best. It depends. Beware of claims that there is only one "best way".
We mainly use the feature branch and the develop branch for creating features, however we use release for end-to-end testing. Such as load testing and full functional testing, going through all the quality checks.
Integration testing, unit testing and vulnerability scanning happens on all branches.
But personally I prefer only having a master branch and multiple features.
I definitely like that this channel publishes thought provoking ideas. But these ideas are in a bigger context. I've seen many code bases that if they just pull in the advice from this video they will break their whole flow and not understand where it went wrong.
Things I think you need to do before adopting this idea:
1. Have several suites of unit, integration and e2e tests.
2. Have a feature flag oriented approach. - Here is where automated and manual testing is dependent on
3. Avoid refactoring. The context would be that you need to replace a certain library that you didn't implement abstractions on top of (e.g. directly using components from libraries that after some time get deprecated; this happened in Java, Angular, React). For that you would need to reach a code-freeze moment so people won't use the old library.
Take, for example, the hotfix branch. You develop the hotfix; how does the tester test the hotfix? Do you merge it directly in? No, you have tags in production. Meaning that the tagged commits are stable; the commits in between tags are not automatically considered stable.
I totally agree. Gitflow is antithetical to actual CI. I have tried to change many teams’ process but it never, ever works. People agree what I suggest would be better, but I can’t get around the organizational inertia.
Yep. The project manager, business analyst and project owner all have to be on their game to support such a workflow just as much as the development team needs to be.
If you're not doing pair programming which I know you're a big proponent of, how would you reconcile a "pull request" type workflow without (even small) feature branches?
the benefit of pair programming would be the extra eyes to review. Even with that, the team lead is usually responsible for approving the PRs from my experience. Without pair programming, I would expect the team lead to be doing reviews...as well as other members looking over the PRs to help catch things as well.
Mature testing and feedback loops. If it builds and passes tests its good, refactoring can still be done in another iteration. This is a cultural thing a team will need to get used to. With this in mind it's crucial to make sure code is easily testable (TDD can help) and maintainable (Loosly coupled, highly cohesive, modules) as iterative refactoring is expected and encouraged. There a many design patterns and principles that can help keep an application refactor ready.
Yeah, I wonder how you review the incoming code so rapidly.
This is my biggest problem with the video (or more specifically his videos against branching). If nothing else, feature branches provide a workspace where developers can back up unfinished work or screw around making changes that don't necessarily compile at any given time and such. I think CI purists can go too far encouraging everyone to commit line by line to master. Not every line of code is an immediate improvement to the underlying system without extensive additional work, testing, etc. Nobody's going to commit a multi-year update to a missile guidance system directly to master, even locally. The feature branch is where the feature lives while it's being tested, reviewed, etc. Not every change is a one-line CSS update from 12 to 14 point font on someone's personal web page. I realize he did propose 1 day as the threshold to decide what gets a feature branch, but given that feature branching is so trivial and cheap and offers lots of practical organizational benefits, I just don't see a case for not using feature branches on anything but the least consequential projects.
Also, as much as I hate things like hot fixes and different tracks (master, dev, beta, etc.), there are practical reasons why these are sometimes necessary, such as supporting a one-off customer with a security vulnerability stuck on an older version, or regional regulations that effectively demand different versions of the software. That's stuff that CI/CD purists can't really hand-wave away. I think the principles are extremely important and practical, but I get tired of hearing CI/CD evangelists describe every software project like it's a static web page or a small API, when my whole career has been spent on systems that take a full day just to test, review, and merge, all after the changes are considered finished by the developer.
@@davidboeger6766 CI as a practice is not for every project, no silver bullets. Without the culture and enabling organisation / architecture it will be difficult.
The main goal of CI, IMO, is to restrict one degree of freedom in delivery in order to simplify and streamline the process. The restricted freedom is this principle: "There is only one working version of the software at any time". This makes reasoning about many other parts of delivery much simpler (but maybe not suitable for your org). Everything is an iteration.
CI can be a difficult paradigm shift, similar to waterfall -> scrum, imperative -> functional, monolith -> microservice, branching -> trunk based
The "Waterfall" and "Wheel" development paradigms are constantly trying to sneak into Agile.
No amount of good planning beats out user feedback. Just engineer egos clouding their judgement.
@Peter Brown and yet it is consistently proven that waterfall does not work as well as even a crappy implementation of agile
@Peter Brown Hmm... I'm not sure agile means what you think it means. To me, agile has the same steps as waterfall, however, you design very small features and implement them rather than designing the whole system upfront then coding the whole system.
FDD
I agree with you to a large extent. However, I do see a point in having a development branch (the CI branch) and a master branch (the production branch).
I work on embedded systems (in particular, in the automotive industry [on e-drive control]), where we have software tests, hardware-in-the-loop (HIL) tests, and finally fully integrated tests on an assembled e-motor.
So for day to day development, I agree, it's best to have one CI branch where everyone commits to. Software tests (unit + integration tests) can be done automated for each commit. That works great!
However, in the automotive sector, you also have HIL tests, where you have a very limited number of HIL devices. A set of tests takes a few hours; so, doing this for every commit on the CI branch is often not realistic. It's even worse for the final tests, they take much longer.
As a result, it is useful to have a temporary release branch (like in git-flow) where you do those tests at the end of a sprint. When all tests pass, that version is committed to the production branch (like in git-flow), where all the other departments can always get the latest stable version.
This production branch has one advantage (over just a tag on the CI branch): Clients or members of other departments always have the latest tested/stable version. This gets particularly important because they are not always good with version control.
Regarding synchronisation between the production and the CI branch, I agree that git-flow does it wrong. Any code change should only be done in the CI branch. Hotfix branches are a big no-no. IMHO, there should be only one direction in which commits come into the production branch -- always from the CI branch. Then you don't have a problem with diverging branches.
In your case, I think the only limitation is that your code will be releasable only after passing all those tests. But that doesn't prevent you from using a single branch for continuous integration. The changes can go in as switched-off features and be switched on only in the test environment.
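A minimal sketch of what that could look like, assuming a simple environment-variable-driven toggle (the variable and feature names here are invented for illustration):

```typescript
// toggles.ts -- hypothetical sketch: unfinished features ship switched off,
// and are switched on only in the environment where they are being tested.
// FEATURES_ENABLED might be set to "new-hil-report" only in the test environment.
const enabled = new Set(
  (process.env.FEATURES_ENABLED ?? "")
    .split(",")
    .map(f => f.trim())
    .filter(Boolean)
);

export function isEnabled(feature: string): boolean {
  return enabled.has(feature);
}

// Somewhere in the application: the not-yet-released behaviour sits behind the
// toggle, so every commit stays safe to merge to the single branch.
export function buildReport(data: number[]): string {
  if (isEnabled("new-hil-report")) {
    return `new-format:${data.join(";")}`; // only visible where the toggle is on
  }
  return `legacy:${data.join(",")}`;       // everyone else gets the old behaviour
}
```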
In "lean" terms all the superfluous git flow branches are inventory we're holding onto. (Feature branches are pure "muda".)
This, so much, this!
An interesting idea which, like everything else about continuous delivery, is completely wrong.
Does Toyota change their manufacturing line every day?
Do they change their suppliers of components every day?
Of course they do not. They make minor changes ("hot fix") only when necessary, they make significant changes only once or twice a year ("minor release/model year"), and they make major changes only every few years ("major release/generation").
If you wanted to try to translate continuous delivery to the automobile industry, it would mean every car is built differently, with no regard to interchangeable parts, and you'd have to recall every car whenever something went out of date.
@@fluffysheap that analogy just doesn’t make sense.
I agree that Vincent's statement was respectful. I remember reading Jimmy Bogard, the creator of C#'s AutoMapper, blogging about when not to use AutoMapper. Having the creator's candid input is very insightful and useful for stopping bad or smelly practices.
I think continuous integration is a good idea, but I think pushing directly to origin/main (or origin/master) is a bad idea. My preferred way of working is to split backlog items / user stories into small (mostly) atomic tasks that aim to introduce one small addition. When starting a task we create a task branch, which is short lived. When we are ready to integrate we create a pull request and another member of the team peer-reviews the task. I don't care how senior or seasoned a developer is, nobody pushes directly to main. All developers are human and everyone makes mistakes. By peer-reviewing every single addition to the code base we catch these small mistakes early. When the team works at full speed each developer can still implement multiple tasks in a day, all the while reviewing tasks from other developers. The added benefit of this is that you get to read other people's code daily. That is a great way to learn. Maybe someone knows a nifty trick to tackle a certain problem. When you get to read this code then you learn this nifty trick too. Reviewing is not just about finding mistakes; it is also a great way to spread knowledge.
>I think continuous integration is a good idea, but I think pushing directly to origin/main (or origin/master) is a bad idea.
That's a contradiction :) It's not CI if you don't push directly to the main branch of development multiple times a day.
Note that CI and trunk-based development are the same thing.
>My preferred way of working is to split backlog items / user stories into small (mostly) atomic tasks that aim to introduce one small addition.
That's great.
>All developers are human and everyone makes mistakes. By peer-reviewing every single addition to the code base we catch these small mistakes early.
Sure, and that's why CI is not removing the benefit of code reviews from the picture. It's only advocating a different way of reviewing code, through continuous code reviews that happen *while* developing, and not at the end. There are various disadvantages of having PRs at the end of development phases: it's extremely hard for a reviewer who has not been involved in the development of a feature to get a good understanding of what the code does. You haven't seen it working live, you only have a bunch of files to statically analyse. The risk is that reviewers only skim through the files for a superficial validation, trusting the creator of the PR (especially if she/he is a senior member of the team who knows the system well) and coming up with a "LGTM". This is where PRs can become really dangerous tools.
It is much better to use pair/mob programming and continuously review the code while working on it.
>The added benefit of this is that you get to read other people's code daily.
Is that a benefit? Having to stop your development activities to read other people's code of which you know very little?
>That is a great way to learn.
Sure, but learning through collaboration is 10x better.
It has worked well for me to take a break from what I'm doing to look at someone else's work. It gives me an opportunity to step back from what I was doing. It often gives me new ideas, or I might realize something that I wouldn't necessarily have thought about if I had just kept doing what I was doing.
Also keeping the diff small helps. And you should always checkout the branch you are reviewing and look around the code, not just the diff. You can try building and running it locally while you're at it.
I have a question, do you do automatic testing after the merge and before allowing customers to use the software? If yes, during the time of testing, what software is served?
Thank you Dave, for highlighting my favorite CD topics. I'm gonna promote them for my team.
Exactly. Great explanation! CI is where Feature Toggling becomes even more important, where maintaining multiple features becomes a matter of a condition within the code, not a branch... Continuous Integration, Continuous Deployment and Delivery on demand (Continuously deploy into production, and toggle features on when ready for delivery to the end user)
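To make the "condition within the code, not a branch" idea concrete, here is a minimal sketch; assume a hypothetical flags.json (or a flag service) that can be edited without a new build, so a feature already deployed to production is switched on for end users only when the business decides to deliver it:

```typescript
// releaseFlags.ts -- hypothetical sketch: deployment and delivery are decoupled.
// The code for "checkout-v2" ships with every deploy; flipping the flag is the release.
import { readFileSync } from "fs";

type Flags = Record<string, boolean>;

function loadFlags(path = "flags.json"): Flags {
  try {
    return JSON.parse(readFileSync(path, "utf8")) as Flags;
  } catch {
    return {}; // unknown or missing flags default to "off"
  }
}

export function checkoutUrl(): string {
  const flags = loadFlags();
  // Delivery on demand: turn "checkout-v2" on when the business is ready,
  // not when the code happens to be merged.
  return flags["checkout-v2"] ? "/checkout/v2" : "/checkout/v1";
}
```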
"Represents the reality of Software Development" - What reality?
Branching isn't complicated or slow and it certainly doesn't prevent continuous feedback. You can always choose to merge other branches into your working (or feature or w/e) branch *at your discretion* .
The "reality" is different for each developer, team and organization. Say you have a testing environment that runs in parallel to your production environment so your non-technical stakeholders can provide feedback and are free to experiment themselves. Do you really want to deploy these two different environments from the same branch? If yes, you just made things more *complicated* in the real sense of the word. You are tangling up two things that should be separate and simple.
Another reality is that you might have fluent, constant communication in your team and a codebase that allows separated features, modules and abstractions to be developed independently. You communicate and know in advance that they won't intersect in critical/logical areas, but only in the plumbing. It becomes useful to separate these working items into branches, because merging/coordinating plumbing code is straightforward, but it becomes tedious or even inefficient if you need to do it constantly because you don't yet know how to connect the dots before certain parts are finished.
So in conclusion, I find this advice useful if modified this way:
If you work in small teams and direct communication between developers and other stakeholders is guaranteed, then use the branching strategy that fits your needs, AKA "the reality", and don't just follow a predetermined pattern (like git flow); make it as simple as it can be, but no simpler. Strong conventions and rules only become useful if you need to context switch between many different teams and projects. Otherwise just use your tools and adapt your processes to your reality.
@@andrealaforgia5066 processes and tools cannot substitute communication and engagement with your coworkers. There is not one size fits all, no silver bullet is what I'm getting at.
The beauty of git is that it doesn't inherently prevent you from merging or branching. If you need to branch, then do it; if you need to merge, then do that. It is a highly dynamic system. Using it should be driven by actual needs, not arbitrary rules.
Saying that rule/methodology X simplifies things begs the question: Under what circumstance?
Simplification is not subjective. It means you are disentangling something that should not be intertwined. The subjective part is the "reality" that you model and work with.
@@clickrush >processes and tools cannot substitute communication and engagement with your coworkers. There is not one size fits all, no silver bullet is what I'm getting at.
Again, this is a typical logical fallacy, black & white reasoning. Who ever said that CI is a "silver bullet"? CI is a way of working that has proven to be better than other ways of working to develop software. Period. No one has ever stated, in any books/resources/articles about CI, that CI is a "silver bullet". People keep rejecting CI and trunk-based development by putting a lot of emphasis on communication, as if communication were the only thing a team needs in order to deliver software. A team needs to be able to continuously integrate their work. That's the point. CI is not a substitute for communication and engagement with your coworkers.
How is a long-lived feature branch approach fostering any communication, given that it's a way to hide your changes and silo your development? Developers adopting feature branches often do not communicate for days and days, only to discover problems at the time of merging their changes.
>The beauty of git is that is doesn't inherently prevent you from merging or branching.
I don't see that as a "beauty". This video is not about git, it's about GitFlow. It's different.
>Saying that rule/methodology X simplifies things begs the question: Under what circumstance?
How much do you know about CI, which has been going on for almost 2 decades, and all the studies about it that prove it's the best way to develop software we know so far? Read "Accelerate".
@@andrealaforgia I wasn't arguing against CI generally. I was questioning the notion that one particular way of using git "represents reality" for all, and was giving examples where you make things more complicated if your model doesn't match your circumstances.
What may happen if you don't separate work into branches on the VCS level is that you are separating it on the code level. You introduce configuration and (ad-hoc) logic in your code base so you can accommodate staging environments, beta/prototype features and so on. Which means you need to test that code too, which means you blow up your code base just so you can avoid branching.
It's a tradeoff. In some cases this is great, in some it isn't.
Again, my point is not against CI generally. It is against big claims of how people should use their tools by making statements about "reality" and "best practices".
And I didn't want to say this at first because it shouldn't matter, but I don't need to be convinced of simple branching models and CI. I/we actually use CI most of the time, probably over 95%, except when we don't. When we need a branch for something, then we just branch instead of coming up with a convoluted way of avoiding it.
@@clickrush >I wasn't arguing against CI generally. I was questioning the notion that one particular way of using git "represents reality" for all
I see a contradiction there. CI does dictate "one particular way of using your VCS". The definition of CI is "practice of merging all developers' working copies to a shared mainline several times a day" so if you're not questioning CI, you shouldn't be questioning trunk-based development either, cause CI and TBD are the same thing. Nobody is saying that this particular way of using git represents reality for all. What has been said is that if you want to implement CI, you need to give up ways of working that are antithetical to CI, and GitFlow is one of them for the reasons exposed. You are still free not to do CI, though.
>What may happen if you don't separate work into branches on the VCS level is that you are separating it on the code level. You introduce configuration and (ad-hoc) logic in your code base so you can accommodate staging environments, beta/prototype features and so on. Which means you need to test that code too, which means you blow up your code base just so you can avoid branching.
Absolutely not. Have you actually ever tried trunk-based development + feature toggles? It's much easier than you'd think. When feature toggles are inactive, you can consider the code they hide as not there at all.
Separating the code physically (feature branches) offers fewer benefits than separating it logically (feature toggles). The latter approach at least makes sure that the various streams of development are integrated; the former doesn't, and the longer those branches live, the more they diverge from each other and from master, and the riskier it becomes to merge them into master. You can switch features on in your specific test environment and do all you want. It's much cleaner and simpler. The ability to integrate work and the ability to test/release features are two different aspects of software development.
Note that you say "you blow up your code base just so you can avoid branching". First, you don't blow up your code base at all, quite the contrary. Second: the purpose here is not to avoid branches, but to fulfil the definition of CI. The fact that branches are avoided is a nice side effect.
I have some points which in my opinion support the idea of feature branches, and they are mainly about QA:
- Code-Review: A pull request from a feature branch to develop or master can very effectively be reviewed. The reviewer does not need to go through all commits that were made in order to create a feature but only the diff which is present at the end
- Testing & Review: If your feature lives on a branch, a tester / product owner can review the version on that branch. If bugs are found or things are missing, we do not have that "broken" state on master; we can fix it on the feature branch. I think this helps towards having a stable state on master which is always releasable.
Dave is missing many points here.
First of all, he is putting "continuous deployability" on a pedestal. In reality, most companies couldn't care less. The ultimate goal is to support the business and most of the time deploying rarely like once a month or quarterly is completely fine.
Secondly, he is talking about potential conflicts and having out-of-sync copies of code. If team members are using common sense, these things happen very rarely and are resolved swiftly.
In general, we should try to avoid marking tools as "bad idea, period". Both gitflow and Dave's idea of continuous integration are viable strategies with distinct characteristics.
@@andrealaforgia5066 Thanks for sharing your opinion. I would gladly hear more. Since so many people are favoring Dave's approach there has to be something valuable there, even though I cannot see it yet.
a) 100% agreed that teams should integrate their work often. I am using gitflow, and everybody is integrating their work often (small PRs => short-lived feature branches + every PR is built/tested before merging to develop). It is hard for me to imagine how giving up feature branches is better. I am happy to learn though.
b) I may have just not experienced the problems you mentioned. By common sense, I mean stuff like talking to each other and recognizing that if you are working on this module, I will just do something else in a different part of the code. If there is a shared piece, maybe let's pair program the common part first. Again, I cannot imagine how such an approach would lead to any of the substantial problems Dave is mentioning.
Look at this! A sane comment in a sea of trunk-based zealotry. It's refreshing to see some nuance.
I still disagree that GitFlow is incompatible with CI. It may be incompatible with CD, but I don't really think that's a bad thing. Not every company needs CD, and far too many companies try to do CD when they don't really need it. On the other hand, I would prefer GitHub flow.
Feature branches make testing difficult. The sooner you merge to master, the sooner you find issues. Fail faster!
Not if your test harness on master branch runs for 16h+ (SW+HW simulations). Just imagine running all tests on all hardware platforms for Linux (quite successful 30yo project) after every single commit. CI/CD is OK for small, local teams (feature branch maybe?).
We simply have separate test environments for every team/feature.
This week I had the opportunity to start testing trunk based development with my team. Thank you for the valuable information.
How do you feel about CI or even CD in open source Projects? How can you organize and achieve it there?
What about validated environments like health-related businesses (pharma, hospitals)?
Here each released and used version needs to be validated (sometimes even by outside parties).
How would CI / CD work here?
Would love to hear your input on these!
GitHub actions? Travis CI? Many open source projects have integrated CI, with CI build state badges, some even with Code Coverage, Static Code quality analysis, Static Code security checks, dependency checks... all free for Open Source projects.
@@miletacekovic I know about the software solutions for Automated pipelines. These are tools to help facilitate CI/CD. They are not continuous integration itself. I was not talking about the technical aspect for open source.
But usually open source projects get contributions by being forked and then having a pull request accepted. And, if you saw the video, this is not true continuous integration (CI), since it is basically creating feature branches.
That is what my question is directed at. How do you organize it with many distributed people.
Or even harder in my opinion in validated environments.
@ Simple branches with pull requests are fine in that case, when you objectively cannot organize pair programming and must do peer review. But then pull requests are better merged into main; no need for Master and Develop and all that complexity.
@ To me, CI makes sense for core contributors who aren't operating on forks, not for external contributors.
Interesting, any thoughts on how to manage auditing as part of CI? We have peer and independent review of each feature branch prior to merge, and audit those reviews prior to release (random sample testing etc). We maintain multiple production versions (mostly due to air-gapped deployments), so I can't even approach CD, but I do see CI as a better concept for developing at a higher cadence.
If you can get the audit department to accept the CI server and CD pipeline as good enough, you can do trunk based. Pair programming is great for review, but sometimes that's not accepted because it's hard to audit. If you need the source control system to show a log of reviews, then you can use very tiny feature branches. Basically, the branch should be open only very shortly, for a very small change. This way you can still integrate multiple times per day.
That's at least how we solve it in an audit heavy world. Also in CD we are including a risk based change approval flow, connected to the service management tools, which sometimes requires an approval before getting deployed to production. The product owner then gets notified via email and has to approve. Risk is determined by types of change and change sizes and such.
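As a rough illustration of what such a risk-based gate could look like; the change types, weights and threshold below are all invented for the example:

```typescript
// Hypothetical sketch of a risk-based change approval gate in a CD pipeline.
interface Change {
  type: "config" | "feature" | "schema" | "security";
  linesChanged: number;
}

function riskScore(change: Change): number {
  // Weight by type of change and by size of change, as described above.
  const typeWeight = { config: 1, feature: 2, schema: 4, security: 5 }[change.type];
  const sizeWeight = change.linesChanged > 200 ? 3 : change.linesChanged > 50 ? 2 : 1;
  return typeWeight * sizeWeight;
}

export function needsManualApproval(change: Change, threshold = 6): boolean {
  // Below the threshold the pipeline deploys automatically; above it, the
  // product owner is notified and has to approve before production.
  return riskScore(change) >= threshold;
}
```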
Please, make a video about clean code (the book). Personally, I don't like it. Give us your thoughts about it. It would be interesting.
Personally I liked it, but still would be interested in Dave's thoughts 🤔
What's wrong with Clean Code?
Are you referring to Robert Martin's books?
@@EngineeringVignettes Yes
I love this book. Not sure why people dislike it. But it is just one book among hundreds that think they are right.
Hello Dave, thank you for the video. The idea is quite clear, but I'd like to ask a question to clarify one thing. Suppose we have a new application to build and it will take months before an MVP release is ready, and of course we have yet to write any integration/UI tests before we can even start doing CI/CD. What branching strategy should we choose?
I choose Continuous Integration, sometimes called (Trunk Based Development). The really important thing in this phase is to spot problems as quickly as possible, because in this phase you will make a lot of mistakes as you explore the problem, and your solution. So CI is EVEN MORE VALUABLE at this time.
Why not just use Google Docs for your version control? MINIMUM cycle times, MAXIMUM deliveries.
Or NFS, where everybody can read-write at will :D
I believe a feature branch is a branching strategy within your local development, yet somehow people tend to push this feature branch to origin and never remove it after merging to the develop branch. Even if a feature branch is pushed to the origin server for backup purposes, it must be housekept and removed after the project/CR finishes. I think most developers are not quite used to this distributed concept and practice it like the old-day client-server approach, so eventually every local git repo is also considered a server node.
Push origin master? How do you handle code reviews and ensuring quality? "It works" is a dangerously nebulous term... It compiles? Great. But does it actually _work_? And if it doesn't, what then?
If you use pair programming, there is no code review. And pair programming provides much better feedback than code review
Imagine everyone working like that, without any PRs, just pushing to origin master in an open source project or some popular framework...
Git Flow is great. Instead of having little feature flag turds all up in your source code, the feature flags are feature branches that are only merged into production once they're ready.
Hi,
I find the reasoning behind pushing small changes into master convincing in terms of safety and integration, however, I don't understand how I can have a Pull Request if I push my changes directly into master.
Isn't this too big of a trade-off? Maybe it's better to create a feature branch even if it's just for 2-3 hours, in order to have PRs?
Pair programming, or yes, just create a mini feature branch. The idea of CI is not about no branching at all, but about committing (merging) frequently; branches just tend to become long-lived, so we want to avoid that.
TBD allows for a few feature branches, just that they must all be merged into trunk within 24 hours of creation.
Oh man, this is what I've been telling people for years. That flow creates so much unnecessary work, complicates code reviews and leads to many frustrated hours during merges (sometimes making merge impossible)
I am SURE that I am misunderstanding something about Continuous Integration as you describe it now... My question is about development on a non-trivial, wide-reaching, breaking change/feature/spec. HOW do you pull in the current changes from other devs while you're actively changing what they are changing? Won't you be repeating your conversions multiple times a day? Do you need to engineer a SHIPPABLE transitional state as you move toward the new, breaking, end-result?
You don't "pull in" those changes. You and the other devs work on the same codebase (Continuous Integration / trunk-based development). Working on the same codebase and committing micro changesets multiple times a day, you break down work more easily, hide incomplete features behind feature toggles, and avoid merge hell.
@@andrealaforgia - Yes, unless the Gitflow project is ruled by an "iron fist", it does become a _merge hell_ as more branches are created and changes start to occur on a released product.
CI merge change deltas are small so potential merge conflicts are minimal, if any.
I think one of the toughest challenges, when moving from waterfall to CI, is in the breaking down of work items into smaller pieces, which requires additional discipline and effort. A single waterfall work item may even be an epic in a CI equivalent...
Just my opinion though...
Cheers,
There is an exception to every rule. Major groundwork changes may still need a branch.
Feature Toggles/Flags can isolate changes until they are complete, but teams have to be diligent about maintaining compatibility, using an expand/contract approach, and cleaning toggles up later
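A hedged sketch of what expand/contract might look like in code, assuming a hypothetical rename of a `fullName` field to `displayName`; the point is that trunk stays releasable at every intermediate step:

```typescript
// Expand/contract sketch (hypothetical field rename), so every commit is releasable.
//
// Step 1 (expand): write both old and new fields, keep reading the old one.
// Step 2: migrate readers to prefer the new field, falling back to the old.
// Step 3 (contract): once all readers/writers are migrated, delete the old field.

interface UserRecord {
  fullName?: string;    // legacy field, removed only in the contract step
  displayName?: string; // new field, written alongside the old one during expand
}

export function writeUser(name: string): UserRecord {
  // Expand: keep writing the legacy field so old readers continue to work.
  return { fullName: name, displayName: name };
}

export function readUserName(user: UserRecord): string {
  // Migration: prefer the new field, fall back to the legacy one.
  return user.displayName ?? user.fullName ?? "";
}
```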
@@soppaism not even needed for that. Major groundwork can and should be done in a TBD way.
Dave's videos are really a great insight to understand the basics of software engineering. I have a few questions after watching this episode.
How do we do peer review when working on master branch directly? I understand that pair programming is an effective way to improve code quality, but does that eliminate the need for peer review? Is peer review an overrated concept?
Yes, it eliminates the need for peer review, because you have a constant “peer review” during construction. I have worked in several different regulated industries, all of which required peer-review, pair programming counted as peer review in all of them. The quality of work produced by pair programming is certainly, measurably, higher than code without pairing. I haven’t seen any academic studies of “pair vs peer review” but subjectively, the places where I worked and did pairing built better software than the places where we did peer review.
It is better to have the review happening while you are writing than after you think you are done.
@@ContinuousDelivery The problem is that a lot of developers don't like pair programming.
@@a544jh It is sad that people have been lured into our industry thinking that they don't have to work in a team or interact with people, which is at the core of software development. Ensemble working is the better default approach.
@@a544jh in my experience the problem is that a lot of developers haven't tried it. My experience has been that the majority of devs prefer it once they have tried it, and a small minority, less than 1 in 10, really dislike it.
I can't agree with the title nor some of the content of this video… It's simply misleading to say that gitflow is bad, since it works for so many teams and devs. In our team we maintain several environments (dev / test / acceptance / master) which each have their own testers. Some of the features (so feature branches!) get accepted in dev before they go to test and acceptance, while some may be turned down. Similarly, this happens in the acceptance environment before going to production (master). In this case it's easier to maintain environment branches and the individual feature branches, and eventually merge them into the target branch when they have been tested and accepted by the end users of the environment branch prior to the target branch… It's not easy to explain in words, but simply saying not to use certain techniques without nuance, and ignoring the use cases they may have, smells like bad teaching to me!
TBD is more for agile organizations that appreciate fast feedback. GitFlow works better for gatekept waterfall-style and trust-lacking environments like yours, which is fine.
How about feature toggles?
Question...are different branching strategies for different stages in the SDLC? Or are people able to do this CI strategy from first commit? I've mostly worked in git flow houses because the maturity of the developers/project managers aren't there to support single branch workflows.
I see how, from a developer standpoint, this works and is beneficial... but from experience... how do we get teams up to working this way when they are often new to agile, git, or enterprise software development altogether?
Agreed, if your team is allowed to break the trunk then CI will not work.
It can definitely work from the first commit (though I would recommend getting that first commit in quickly as multiple people creating the build scripts/tooling will cause friction).
We introduced GitFlow to bring some structure to our branching. Turns out stopping to branch does that too.
Dislikes are from junior engineers who, ironically, did not take this video as a learning opportunity. The video was clearly explained and well argued!
I remember back in the day when we were using Subversion, we committed directly on master (trunk, or whatever the name was at the time), and others had to pull and merge before even considering committing. And you did not want to catch up too late, otherwise you were running into merge hell AHAH. Thinking about that today in the light of this whole video made me think it was not so bad...
When I first saw the gitflow diagram I felt sick at the sight of all those arrows. Every one is a potential merge hell. It's great for the "muggles" (non-developers) that worry about what's in a release but never have to use git directly to resolve a merge.
Why would solving a merge become a problem... Lol....
@@zauxst You obviously have never used “MercilessRefactoring” - you must just leave inconsistencies and design burps build up everywhere… or spend almost all your time merging. You have never tried to do XP and CI properly. Lol…. (Why are devs so soften arrogant pricks?)
@@dafyddrees2287 feels weird saying to someone that is a "devops" by trade that "you never have tried to do CI properly". Anyway, it was a question, no need to put your hand deep in your arse.
@@zauxst I meet loads of people that do devops and dev "by trade" that haven't ever learned to do things the XP way (including CI.) It's pretty rare and getting rarer. You're the one with the attitude problem here, mate, with your supercilious use of "lol" after demonstrating clearly that you don't understand why lots of merging would be a problem getting in the way of "MercilessRefactoring" (yes, it's a thing - if you dropped the attitude long enough to learn about it you'd answer your own question.)
This is why I like the term "continuous separation" for ways of working (WoW) like git flow. Git flow also reminds me of how we worked in the past, which led to very late integration of changes, causing a big effort to get a working version of the product.
Thank you so much for this. I've been arguing against GitFlow for ten years. Next please debunk versioning using release dates or git commit IDs.
Just curious about debunking versioning using git commit IDs? Why is that a bad idea?
Please don't use animated backgrounds... they are very distracting.. just do a standard office background. Great content here, but the presentation (background, wardrobe, etc) can be improved. Great work Dave! :D
I don't mean to sound contrarian, but I feel like you didn't do a good job of articulating why gitflow is bad for CI in this video. You seem to imply that it makes it harder to test your code and automate that process, but there are tons of tools out there which can trigger automated tests whenever a pull request is made. Why wait until the code is merged to run automated tests?
Additionally, you mentioned that working directly off master gives developers more confidence that their changes are release ready, but this seems to make three key (and often incorrect) assumptions:
1. Tests are thorough and correct
2. Code is well written and meets the company's standards
3. Developers are only ever working on one feature at a time
In reality developers are lazy and rarely test their code thoroughly, new hires will often write bad code, and developers are often forced to context switch regularly between tasks.
Well, my take on this, if you allow me, is that CI forces everyone to take a different approach to how they develop software.
For CI to work, everyone must learn how to break things down into smaller, releasable changes, and commit those changes regularly. And this different approach is overall beneficial and a better, more efficient way of developing software. Not because someone says so, but because people who have worked properly with these different strategies found that with a CI approach you create value way more often.
It's not, by all means, an easy thing to accomplish, specially with bigger teams. But it doesn't mean it's not worth doing it.
If you've got simple code and can count on everybody using the most current version of your code, then CI seems like it might work out. As long as you know if the code is correct and reasonably secure.
Honestly, if the code is that simple and short, then it doesn't much matter how you're handling the revisions, it'll probably work. But, if you've got something as large and complicated as an operating system, I'm not even sure how you would be able to apply CI in any sort of sane way.
Sometimes, the best thing to do is to just use several branches and be done with it.
We developers will do what we are incentivized to do. It sounds like the developers you work with are incentivized to use Grenade Driven Development where they are treated as a glorified typing pool with no responsibility for outcomes who toss the results over the wall for others to suffer with. GitFlow may hide that problem, but it's not fixing it.
@@andrealaforgia5066 "There is really no value in running tests on individual, isolated PRs. There is much more value in running tests on integrated code."
In reality, as a matter of best practice, feature branches should be regularly pulling from the integration branch - yes, at least daily. That's where 'continuous integration' happens.
With this pattern, the integration branch should always build & pass all tests and merge conflict resolutions should never have to happen on the integration branch.
The 'one branch' advocates are defining continuous integration only as regular (i.e., daily) deliveries to the integration branch. With a feature branch methodology you still do continuous integration by regularly pulling _from_ the integration branch.
The distinction is at some point just a matter of religion or favorite color, as working with one branch but using a local repo is just a different means of state separation, just as a branch is. Each means of separating state simply has different pros & cons.
I am quite confused about why trunk based would be good at all.
Imagine the following scenario:
John creates some changes, commits them, and they work locally.
Barbara creates some changes, commits them, and they work locally.
John pushes, however it doesn't work in a preview environment, and requires changes.
Until John is done fixing his changes, Barbara is unable to push, since her changes will fail as well due to John's changes.
This could delay Barbara getting feedback for several days in the worst case scenario.
With feature branching, John will push to his branch, Barbara will push to hers.
The CI will do an automatic merge from master into the feature branches, and both their applications are published to a Preview environment.
Barbara finds out her code works, John finds out his doesn't. Barbara merges into main, and John gets those changes on his branch.
John can then continue to work on his changes until they work, and merge into master.
At all moments in time, both John and Barbara can test their changes, no delay.
What is the problem with such an approach? I see no downsides.
The first scenario that you describe is telling you the truth: as long as John's changes are in place, the code is not releasable. So his job is either to fix things as quickly as possible or revert his changes. The second scenario is lying to you. John and Barbara both think their changes are good, after all they are working on their feature branches, but as soon as they merge them together, all hell breaks loose and nothing works. They don't find this out until much later in FB than in TBD. That means that the amount of stuff that they are attempting to merge together is much bigger, and so more complex, and so harder to figure out what goes wrong.
The problem with this approach is that the data says that FB produces software more slowly, and the SW it produces is less stable (more buggy). CI produces better results. You can find this in the DORA data from Google, and read about it in more detail in the Accelerate book.
@@ContinuousDelivery I'll make sure to read about both.
Wouldn't the hell only break loose when both people are working on closely related elements of the same system?
In that case, it could also be an idea to have both people working on the same feature branch. That way they can still see the changes working together, and you do keep the benefits of having the branch separately, such as being able to have a proper review process.
The only problems I have ever experienced with FB are when a useful feature (such as a library) was added that you want to use in your feature. I have since solved this by getting automatic branch updates.
Using FB, releases can be automated even further.
You could automatically deploy feature branches to a preview environment.
You could automatically deploy pushes to main/master to staging.
And after a tag or release being made, a deployment to production can be automated.
With TBD you won't be able to have these feature preview environments.
Again, I'll make sure to read the sources you have provided to get a better insight on this.
I have not worked on any large projects, only projects with maximum 5 contributors, thus my experience is limited.
@@rafaeltab so now you are doing more work to figure out how to divide up the work between people so that they don't overlap. 😉
The approach that I describe doesn't care, and catches those times when people's work accidentally overlaps.
@@ContinuousDelivery The problem I have with it right now is that the main branch will either not always be ready for production, since it contains unfinished features, or you won't be able to adhere to the rule of 'at least one merge per day'
@@rafaeltab What do you mean by "not ready for production" and "not able to adhere to once per day"? Why would that be true and why wouldn't you be able to have those things?
You need some branches or else how do you do code reviews? Developers should never commit code to master without someone else reviewing the pull request.
Tests are absolutely run on every feature branch.
The point is that if you have a strong testing culture, code reviews don't need to be a first class citizen.
Integrate, tests green, ship it, refactor later.
If every time someone integrates, they break something that your automated test suite doesn't pick up on, then you have bigger problems than what branching strategy you use.
Code reviews are highly overrated. Have your team work as a real team via pair/mob programming and you won't need code reviews and PRs. PRs were not meant to be used by teams of colocated, trusted collaborators.
@@KYAN1TE the PR review by a lead developer is to catch code quality issues that linters or SonarQube can't catch.
@@KrisMeister Your job is to deliver value to customers/users/stakeholders. If you have sufficient automated testing, your feedback loop is much quicker than that of a "lead dev".
By all means this doesn't mean the "lead dev" can no longer do reviews, but it can occur post-integration rather than pre-integration which could lead to prolonging the life of a branch even further.
@@KYAN1TE I don't think we're going to convince each other. Different experiences probably.
Quite honestly all the discussion you have about what branching strategy to use I think is worthless without considering how you're doing your testing, where you're doing your testing, what environments you have to do that testing and how those environments are used and then eventually how you get to production and track bugs and fix them. In short you need to consider the whole deployment and testing process or the best branching strategy is really hard to pin down. Right now our whole problem is around the deployment pipeline and the automated testing and how to make sure that doesn't interfere with QA testing. In the project I'm currently working on, we are severely limited in the environments we can deploy to and how we can do our testing in these environments due to budgets or time constraints setting all these environments up. Branching and merging are not our problem, the testing and deployments have become the real issue.
Wow, this video really triggered my mental defense system :D I have to say that at a glance, I really don't like that idea, maybe trying it out would change my mind... BUT.
TLDR: How do you do reviews? What if I break something and push? How do you track bugs from production? How do you track changes related to Jira ticket?
First of all, I would hate to start every day with solving conflicts. It always feels like a waste of time. With feature branches, I have to do it once. And only I have to solve conflicts with my version. With trunk development, I imagine that every morning the whole team has to do that work if someone pushed changes yesterday. I know it would be a bit smoother than that, but if I was "required" to push my changes to the dev branch at the end of my day, I need to pull first and solve conflicts. Then I can push, hoping that no one pushed anything in the meantime. Then, tomorrow, I have to start by doing the same frikin thing.
I am aware that most conflicts are solved automatically with kDiff or something, but it still feels like a burden.
Second problem: what if I break something? What if I made all the unit tests pass, but broke something at the system test level? In my project, system tests require creating an Azure VM with the whole system set up (we code an app that works inside a bigger app, like a plugin); it takes half an hour before the tests even start. So if I push changes and everyone pulls them, now everyone has broken code. Who fixes it? Me? Should everyone just wait until I fix it, or should they revert? How do I even know that I broke anything? What if it blocks their work? Feature branches give us isolation and defend us from that, especially with a setup that requires a green build before merge.
Nothing stops me from deleting all code just for giggles. How do you do reviews without feature branches?
Third problem. If something breaks in production, how do you track down what broke it? How do you revert the change? With feature branches, you revert ONE merge commit. With trunk based development, do I need to look for all the commits I made that are mixed with commits of 10 other people? Seems like a nightmare.
Also, when do you deploy? At what point is there a build with a full suite of tests that, if failed, blocks the process? If it failed, how do you track down what broke it and who should fix it?
Plenty of questions... Happy to discuss and learn!
So: I work under the model described above, and it is vanishingly rare to spend _any_ time solving conflicts. Pulling and pushing frequently (many times a day, not just at EOD) means two different pairs are rarely touching the same code at the same time.
What if we break something? We fix it. We have fast tests which cover as much as possible and which we run pre-push, but also slower tests that give us feedback more on a scale of an hour or two. That means sometimes people will pull broken code, but usually subtly and very specifically broken code which doesn't stop them from progressing. We have a sheriff - a rotating role to keep an eye on CI and address any broken builds, which usually means going back to the pair that broke it to work out the fastest fix (usually a revert, with a fixed re-apply following).
To continuously push, you need either continuous review (eg pair programming) or trust. If you don't have trust, then drop everything else, that's the single most important thing to build in any engineering team.
Every bug in production boils down to one commit. Reverting a large feature branch which contains any refactoring or reusable utilities is likely to be a merge nightmare: granular commits are much easier to revert. The trick is identifying what the bug is and how it's happening - which is kind of orthogonal to how you push your work.
In general: only deploy something which passes all the tests. That might mean, if you have a slow acceptance loop after your fast unit test loop, you probably want to mostly wait for the slow loop to conclude successfully before deploying.
There may be circumstances where it's pragmatic to circumvent slow tests to get a fixed build out faster, depending on your domain and its risk/opportunity profile.
@@TARJohnson1979 In my team we have an intern and an aspiring junior who need some eyes on them; reviews and feature branches are great for that. They work at their own pace, we give them feedback, then merge.
Trust is one thing, but you still have tests... Don't you trust yourself and your colleagues? :P So in short, you have a person guarding order; we have automated blocks in our way to prevent us from messing up. I like that single commits are easier to revert, that's true, but I still wonder how you link your code changes to a ticket in your work tracker. I guess you put the ticket number in your commit message and then have something that easily finds the proper commits.
How do you work on two tickets at the same time? You just start coding the next thing and put a different number in the commit message?
I think i would really need to work in this manner to get a proper opinion. I would really like to try it.
@@Qrzychu92 So, what we do sounds like it's different from what you do along a whole bunch of axes. For example, we don't have a concept of work tracking. We have tickets, but that's there to spell out what we're trying to do, not as a running status update on what we're doing. The linking between a commit and its story is just a reference in the commit message, and moving from one piece of work to another is just picking up the next thing, no real overhead to it.
Trust is multi-dimensional. I trust my colleagues not to maliciously damage the codebase, for example. I also trust them to know what sort of testing is needed for a given piece of work, and - maybe most importantly - to know to reach out for assistance because they don't know what they need to know. I don't trust them to just get code right first try, because we all know that's not something people actually do.
That trust has been established through collaboration, though - it's not something we just assume is there.
As for interns / juniors, my experience is: pairing works really well for this, but isn't sufficient. Sometimes, you've just got to let them get into the weeds at their own pace. That's a context where working in isolation followed by a review and discussion makes a lot of sense. But that working in isolation and then seeking review: that's not about how we develop software, it's about how we develop team members. It's a different activity.
"When do you deploy?" Ideally, as soon as the tests pass.
"If something breaks in production, how do you track down what broke it?" This is where good CD practice comes in. If the test pass, ship it. If it breaks, it's a small change to roll back or, preferably, roll forward.
"Nothing stops me from deleting all code just for giggles." You have that situation now.
"How do you do reviews without feature branches?" Pairing. If not, you have very short-lived feature branches and eat the waste of wait time for code review.
Reading your test environment situation, if I were on your team I would map the testing process including the work time and wait time for every step and re-engineer for faster feedback.
If the build is broken, the team stops and fixes it immediately.
"First of all, I would hate to start every day with solving conflicts." This is very confusing to me. Why would this be the case? You start off your day working from a new copy of origin master.
Conflicts are exceedingly rare. I only get them when I've held onto code for too long before pushing.
@@BryanFinster @Tom Johnson
So, in short, instead of branching and reviews, best practice is to do pair programming, which makes sense. Never done that to a serious extent :)
As for deploying as soon as the tests pass: in my project the tests, with the whole environment setup, take up to 4-5 hours, which means there would be a high chance of someone making a new commit in the meantime. This is why I like the idea of a release branch - you push code to it, then the pipeline takes care of everything else if the tests pass, of course, but you can run them on the pull request, before merge, so the branch remains "clean" and working.
As for nothing stopping me from deleting the code - to merge to the develop branch I need at least one approval from someone other than me and a passing build (on PRs to develop we run a shorter suite, around 30 minutes).
Work tracking - well, our product has a 24/7 hotline for customers, we have on-call duties and we need to track when and how we fixed things that came from the client, so PRs and "aggregation" of git blame are very helpful. The most difficult thing lately, since we moved from quarterly releases (yes, but we are making progress!) to CI/CD, is keeping track of which ticket was done/fixed in which version. We need to automate that.
Last thing, the conflicts. Yes, I overreacted :) even with branching I rarely get to solve conflicts by hand (kDiff is really good!), so you can ignore this point.
To sum up, the whole thing is much more than GitFlow vs trunk. It's completely different approach on so many levels - pair programming vs PRs and reviews, staying on course vs tracking progress, having develop branch in production vs having a release branch and distinct versions.
I need to take a deeper dive into this, maybe we will run some pilot sprints (do you still have sprints, or does kanban work better?), because the more continuous our work gets, the less I like gitflow, but this is just the opposite end of the spectrum.
How mission critical are your products? Do you feel like your methodology has an impact on their stability?
We branch per ticket; they often live only for a day to around 3. You said why branch if the work is so small - well, it makes code reviewing and rollback so much easier, as you don't need to work out which commits made up a task, and it only costs you about 5 seconds to make a new branch. You can then keep that branch up to date by merging any changes from origin master (or from other branches that are working in a similar area to yours) into your working branch. Once the developer is happy they can merge back into origin master, or open a pull request depending on your company setup, to be automatically deployed to the QA envs. I would say the flow you mentioned risks pushing to origin master too quickly and potentially committing breaking changes that you know aren't complete (as one should also commit often).
The first time I saw GitFlow, my reaction was: 'Guys, you cannot be serious! Why would you do such a complex thing that is not CI friendly?'.
Then I saw a lot of people praising it, and I thought: 'OK, then it must be just me being blind, maybe they know how to practically run CI on zillions of branches'.
Dave, thank you for explaining me that I am not blind :).
My pleasure 😁
@@ContinuousDelivery When you see people attempting to do CI with gitflow and have zillions of branches being built - that's when you know CI has gone through what Alan Kay called "the great low pass filter of life" ;-)
@@dafyddrees2287 I love "the great low pass filter of life" 🤣
Interesting concepts. In the 1990s, using Perforce, we developed a 2-branch system for a large engineering system which was mission critical. We had about 60 developers at the time. A key assumption of the system was that "hotfixes" were absolutely banned: everything had to be a formal release. The developer is the least important actor in this business scenario; the customer is the most important. As such, when a developer hit 'submit' it was part of their professional responsibility that they had integrated and tested that software against 'head'. We had a visible webpage that displayed broken 'submits' as they occurred.
A key part of this philosophy was an comprehensive software build system. Each developer could build the entire massive system from blank disk in minutes. We had automated testing. So expecting a developer to test their change was not overly onerous.
Having said this, there is very little about the Git environment that I like, compared with Perforce. Perforce, with its reservation system, is better suited to serious endeavors with critical software.
Wonderful content. I think the same, mostly. But when there are juniors on the team, sadly it might not be possible to go full CI, since a junior dev's code, although working, might still need refactoring and review. We are trying to do pair programming as well, but there are some limitations to that, like timezones etc. Outside of this I strongly agree with what you think.
Most of this I consider just contextless bias, but indeed forcing people to think about commits in the more incremental way (i.e. my commit cannot break the build) sounds pretty nice, I like it.
Interesting. What you call "contextless bias", I consider "common sense". I dismissed Gitflow as soon as I saw it because I thought about it for about 30 minutes and saw how unnecessarily complex it was and how it just compensates for issues elsewhere in the development process.
@@edwardcullen1739 I congratulate your brilliancy! What kind of programming (domain) you're doing?
How do you handle code reviews when everybody commits to master all the time? Don't you ever make pull requests?
Continuous code review using ensemble working is a good option. You cannot inspect quality in afterwards anyway, so the best way to make sure you ship working software is to review while writing it, using two or more brains at the same time.
Nah, there are many testing tools that can also be used with branching, and the whole branching > pipelining > CI/CD cycle with only one branch is not selling it to me.
"Just keep a local copy of master and merge when you are done" is (manual) feature branching. With the added risk of lost work that you never pushed. Just like we developed softwares before we had VCS that was good at branching.
"Just keep a local copy of master and merge when you are done" does not mean "two weeks of work".
It means something like committing/pushing every hour or so. That means committing incomplete/unfinished code, that still has to work, but that remains unused. It requires a different way of working, thinking, designing, testing, building, communicating.
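Roughly, that hourly rhythm looks like this (the test script name is hypothetical, and you only push when the tests pass):
    git pull --rebase origin master   # integrate everyone else's latest work first
    ./run-tests.sh                    # hypothetical local test script
    git push origin master            # publish the small, working increment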
Exactly! If work commits don't get pushed to origin, it's basically... having a feature branch, but without fail-safe persistence to the server :)
If all the commits for as-yet-incomplete features that might take weeks or months to develop are randomly mixed into one sequence, how are you going to remove experimental features that don't make the cut when they are hundreds of commits spread over thousands of commits? How do you even make sure those removed features don't leave skeletons behind?
It's even worse. What if I have to fix a bug urgently in the current production version, but half of the next features, which don't fit into the current version, have already been committed?
@@_b0h4z4rd7 Branch off the version tag to do the bug fix. I have no answer to the problem raised by Mik Wind, though.
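Concretely, a hotfix cut from the released tag could look something like this (version numbers and branch names are invented):
    git checkout -b hotfix/1.2.1 v1.2.0   # branch from the tag that is in production
    # ...fix and test the bug...
    git tag v1.2.1                        # tag the fixed release
    git checkout master
    git cherry-pick <fix-commit>          # carry the fix forward so it isn't lost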
You make a great argument if you have a live running product with only 1 release. If you have to maintain multiple releases, you need multiple branches.
There are other ways to handle this too. One approach is to keep one version that configures itself on start-up. The HP LaserJet team did this for all of their printer products when adopting Continuous Delivery. The result was a very dramatic reduction in the cost of maintenance.
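I don't know the HP team's exact mechanism, but the general idea is one build whose behaviour is chosen at start-up rather than one branch per product. A minimal sketch, with all paths and names invented, might be:
    #!/bin/sh
    # Hypothetical launcher: a single executable, configured per device at start-up.
    MODEL=$(cat /etc/myapp/model)          # detected or provisioned per device
    case "$MODEL" in
      pro)  FLAGS="--duplex --stapler" ;;
      home) FLAGS="--duplex" ;;
      *)    FLAGS="" ;;
    esac
    exec /opt/myapp/bin/app $FLAGS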
Actually, master is just a pointer to a commit, as are other branches :)
Pedantic AF. I approve of this comment.
You're misunderstanding (wilfully or ignorantly) that there are two processes being discussed; that there are commits in both is neither here nor there. Wait until you get into the whole rebase vs merge argument, that's gonna totally blow your minds...
@@marshalsea000 I meant there is no any entity like "branch" in git. it is just a pointer to a commit for our convenience of working on commits tree :) there is only a single tree in git
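You can see it for yourself in any repo: a branch is just a tiny file holding a commit id.
    git rev-parse master         # the commit id the branch currently points at
    cat .git/refs/heads/master   # the same id, stored as a one-line text file
    # (if this file is missing, the ref has been packed into .git/packed-refs)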
@@mikhailbo8589 There can actually be multiple separate "trees" in a repo. Not that it's common though.
I love the T-shirt choice for this video :D
The idea that we're testing an out of sync version by branching is a fallacy.
When pull requests are created they contain the latest develop code merged in. Thus they're accurate at that time.
If develop changes, conflicts will be shown, thus forcing you to resolve them.
When a feature branch is merged to develop or has direct commits, CI is fired off. Thus the develop branch is always tested with all changes together.
Nothing gets into main/master without being tested correctly.
And in addition, since you're using branches you have the flexibility to decide when to release changes and have options like parent feature branches.
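To make "accurate at that time" concrete, keeping a pull request current with develop is just the usual sync (assuming origin/develop is the integration branch):
    git fetch origin
    git merge origin/develop   # surfaces any conflicts with the latest develop
    # resolve conflicts if any, re-run the tests, then push to update the PR
    git push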
With the CI approach you describe no code reviews are happening and you don't have the option to work in isolation. This is really bad for any significant application.
> With the CI approach you describe no code reviews are happening
Why do you think code review should be coupled to merging a pull request?
> and you don't have the option to work in isolation.
WON'T FIX. Working as intended.
@@DavidWickes
If you have a point just make it. I'm not here for your condescending open ended questions.
Great explanation. I looked at GitFlow once and decided I wasn’t ready for that sophistication. Now I see why it’s mutually exclusive to CD.
At 10:00 you described your work flow. I’m not sure if it was just for simplicity or if I’m missing the bigger picture here but shouldn’t someone have reviewed your code before you merged it into origin master?
>shouldn’t someone have reviewed your code before you merged it into origin master?
Yep. The guy sitting next to you in a pair or the guys sitting around you in a mob. Or, in today's pandemic terms, "sitting".
@@andrealaforgia So how does the person "sitting" next to me see the code that I'm writing? I mean I exchange snippets of code through Teams, often even as screenshots, but eventually I still have to share it somewhere so they can take a look at it. That place is usually a separate branch that does not mess with the production code. I see no desire to "pair program" on the branch that is the "correct version" of the code. There's a lot of talk in this channel about "features", but it is very rare for us to have "features" that take less than 1 day to develop, so having CI in this fashion makes no sense.
I think you've never implemented a pipeline before 😂
I wonder what our model is.
We develop and test changes locally in a develop branch, and deploy that to a QA testing environment where it gets tested by a QA team.
Once we fix all the defects the QA team finds, we create a "release candidate" branch that we deploy to a user acceptance testing (UAT) environment where it gets tested by key users.
Development continues in the dev branch while UAT goes on.
We fix defects that users find during UAT in the RC branch, and immediately merge those changes back into the dev branch.
When UAT is over, we create a production branch and we deploy that to production where it gets used by all the users.
If there is any break fix work in production, it gets fixed in the production branch, and merged back into the RC branch, and further back into dev.
But we almost never do any break fix work in production. The problems in production have rarely been so bad that users can't wait until the next release for them to get fixed.
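In git terms (branch and version names are only illustrative), that flow is roughly:
    git checkout -b rc/2.4 develop    # cut the release candidate and deploy it to UAT
    # ...a defect is found during UAT...
    git commit -am "fix UAT defect"   # fixed on the RC branch
    git checkout develop
    git merge rc/2.4                  # immediately merge the fix back into dev
    # when UAT is over:
    git checkout -b prod/2.4 rc/2.4   # cut the production branch and deploy it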
Some interesting ideas and insight in the video, but it's too dogmatic and clickbaity, which is unnecessary. It also sort of lacks context. Not every client/customer arrangement includes spec changing / tweaking / testing on a daily basis. Having sprints after which feedback is collected and addressed is a viable approach. You don't _need_ daily feedback / micromanagement. Having to implement tweaks based on feedback also doesn't mean everything you did before all of a sudden becomes invalid and gets chucked out the window. You simply improve things incrementally.
How would this work with code reviews? Often feature branches offer the benefit of code reviews as well. I was thinking that could be possible by making release branches from master/main and doing the review then?
* Sorry I mean merging to a release branch so you see all the changes since the last production release together.
My preference is pair programming, so there is no separate "code review" step and so no need for PRs. With pair programming you get a better review than a code review alone gives you, plus lots of other benefits.
@@ContinuousDelivery @Andrea Laforgia I'm repeating myself from above, but it seems it's necessary: code reviews and pair programming are different things and not interchangeable. Working together with someone on the same code leads to a different perception of the result than someone with fresh eyes has. So even for teams in the same timezone, code reviews should be done asynchronously. That is: asynchronous code reviews have a different set of advantages/disadvantages and are not just born out of necessity. While I think pair programming is highly desirable, it doesn't make code reviews expendable.
@@bitti1975 No one has ever said that, with pair programming, code reviews are expendable. Async code reviews are, though. It all depends on what you mean by code reviews. If you mean PRs, yes, they can be removed with pair programming. Code review is a fundamental activity that you make *continuous* with pair programming. You shift left the code vetting, decisions, and agreements you would normally perform in a PR. The best way to review code is to make decisions whilst working on it, not after.
There is a mountain of evidence that proves that pair programming is highly effective, and mob programming is even better.
@@ContinuousDelivery What do you do when pair programming isn't practical from an organisational standpoint? What would you suggest then?
@@andrealaforgia I explicitly specified code reviews as an asynchronous activity. So yes, even you seem to think they are expendable. Saying "The best way to review code is to make decisions whilst working on it, not after" is just a postulation, but at least it means you acknowledge that there is a difference. Some things are easily overlooked in the heat of the moment, so while it is desirable to improve code as early as possible, some things can only be seen with a certain distance. I don't know why you have to reiterate that there is high value in pair programming, though, which I agreed to anyway.
That's an extreme view of CI. There needs to be SOME delay, on a consumer system anyhow. You can't just publish any old crap without review. But you CAN also publish your develop branch, if you are brave.
If you want your changes to work, just keep rebasing...
CD does not mean publish/release every commit!
It means publish the latest version/build that is regarded as "good" at your choosing (by your context/definition). As the saying goes "If you can't deploy right now, it isn't Continuous Delivery".
Rebasing does not work if everyone's changes live in their own branches. You will end up with lots of changes that are unintegrated, since there is nothing to rebase against.
We decided to skip develop and go directly to master literally in the previous sprint. After years of using develop to increase the confusion over the current state of the system, everyone finally seemed to have understood why it's a bit counterproductive in the long run. And yes we will be having issues merging those big a** features that span across multiple branches because we're yet to incorporate feature flags, but we will manage and I call it a win already.
Thanks for the content again Dave. Great to see an a posteriori confirmation that what we did was indeed correct. And the fact that my dev team watches and actually enjoys these means you're really nailing it.
It's about fear. When you work with too large a team of inexperienced and unmotivated people, feature branches are a way to prevent their work from being merged back without a session of some kind of oversight committee. It's horrible, yes, but it kind of does serve a purpose in a peculiar manner.
Yeah, but this idea that a team is a group of people where untrusted members are supervised by trusted members is really bad. It encourages ivory towers. A team should be a group of people working together: get the senior ones to join the junior ones so the latter soon become trusted. Teams are there to unite people, not to segregate them.
Yes, I understand that; my point is that it is a really bad response to the fear. It is a bit like being afraid of anything else: if you do less and less of the things that you are afraid of, your fear will only grow, and eventually you find yourself living in a cellar, eating cold baked beans from the can with a silver-foil hat on your head.
Hiding from the fear is a poor response; instead you need to deal with it in some manner, with some care. The danger may be real, but hiding only makes things worse, not better. I have seen many companies that can't release software at all, despite having lots of people employed to do so. This is a result of retreating from the fear.
The reality is, that if we want to create software in teams, then we must allow people to make changes. The way to make them careful and cautious in making the changes is to make the consequences clear to them. You don't do that, if you abdicate responsibility for the consequences to some small group of over-worked gatekeepers.
The data is very clear: moving more slowly like this results in lower-, not higher-quality software. (See the "Accelerate" book by Nicole Forsgren et al.)
I disagree. Speeding up the feedback cycle is beneficial, I like that. What I disagree with is the idea that automated testing is the best form of feedback.
First off, in an ideal world, you should be able to run a significant portion, if not all, of your tests on a local development machine. In some large projects, especially with many, many microservices, this is difficult or impossible, and in those cases other solutions need to be found.
But primarily, having feedback from other developers on your changes is more important, and in a CI setting you need extra tooling, which in my experience is often quite fallible and confusing, to be able to review the changes made for a single feature. Feature branches and pull requests give you a way to get feedback on your changes, and draft pull requests are criminally underused.
In the end I think your arguments make sense on the assumption that CI/CD are the best way to do development, but I think that's a false assumption for many teams and projects.
@@andrealaforgia5066 Yes, contract testing is an important part of the solution for large-scale projects with many microservices, and is part of what I meant under the umbrella of "other methods". In those cases end-to-end style integration tests would still be hard or impossible to run on a local machine, and that's what I meant. Large systems aren't an excuse to test less; they're a reason to test more.
Also, I don't equate peer review with pull requests, but I actually think asynchronous peer review, even on top of pair or mob programming, is important. Multi-dev programming is a useful tool people should use, but it puts all the people involved into a similar mental space while building the code, following a single chain of thought, just with more minds making it more robust.
Asynchronous code review, when done properly and not as a rubber stamp, can ensure that the changes make sense, from both a how and a why perspective, to someone who doesn't share the same thought process, and can act as a litmus test for how maintainable the code will be six months later when a change needs to be made and nobody remembers the original thought process.