I'd love for Dave to do a video where he is pair programming with someone and leading them through how to "don't branch" for the first time. All the theory is awesome, but for some people it doesn't click until they see a practical example.
That reminds me of some of his earlier videos from years ago, where he took somebody's code (Java, I think) and refactored it into unit-tested code. It was a retrospective, so we never see what the person who wrote the code was thinking, but it's pretty similar. One of my favorite video series.
The real challenge with these ideas is getting any team to agree to try them out. For some reason all the data and logic in the world often just isn't enough to pull people out of their comfort zone. Being wrong and doing things badly is easier than learning and changing.
Have you identified what your decision makers care about, then gathered evidence that what you want to try helps with those things? Ideally with evidence from a source that the decision maker trusts. If you want to sell something, the buyer needs to perceive that it will solve their problems.
@@jimmyhirr5773 That's a good point and well worth considering, but I was mostly speaking from a lead engineer role trying to convince the other engineers. I've had little luck even getting my peers to want to try out any of these strategies, let alone getting to the next step of convincing management and leadership.
I’ve recently been coming to the conclusion that if you’re the lonely voice advocating for a lot of this stuff, at a certain point you just have to start doing stuff against any “rules” that might be in place. In my job, most managers around me are expressly (or tacitly) against pairing or automating manual testing processes. The thing I’m getting more comfortable with is you can just start doing those things (maybe you need at least one other person who is a fellow traveler) and not tell them. When you’re inevitably outperforming others and enjoying your job more, maybe (just maybe!) you can tell them what you’re doing. In other words: maybe stop asking for agency and just take it.
@@videotime706 Agreed. As long as there aren't any detrimental side effects and you're doing it for the good of the organization, I'd follow the old advice: "It's easier to ask forgiveness than it is to get permission."
@@videotime706 How are you able to do pairing if your management is so against it? Do they not assign tasks to individual people? I'd be especially curious how you do this if you have a Daily Scrumtus Report Meeting where you must tell your immediate superior what you worked on the previous day.
Next episode for you - and I do mean this seriously: How saying "not all opinions are good" gets you branded as "not a team player" and the insecurities of leaders in our industry nowadays.
The insecurity is born from tech companies being driven by people who don't want to care at all about how software management is done. Instead they just copy-paste from other companies.
I've seen this attitude a lot in the traditional machine-building industry. Discussing options isn't allowed, any arguments are ignored, and the person highest up the organisation chart picks the opinion that the rest will struggle with for the next couple of months.
I suspect that a lot of vendor lock-in is influenced by vendor kickbacks. Seriously, there were free open source options, but nobody was looking to give up the free lunches at the Brazilian steakhouse. Good times.
I can buy that, though I'd want more than simply that statement. Pointing out that something is bad doesn't change the fact that it's a known quantity. Selling the art of the possible is much harder than merely pointing out that the status quo is bad, because the status quo is known, and folks are comfortable with the known. That said, it's not your fault if you cannot convince folks not to drink poison.
Maybe it's just me, but the title doesn't seem to match up with the content of the video. The content is about how to identify bad ideas whereas the title implies that the content is about problems with Agile. The description also does not match the content of the video.
In my opinion, there are quite a few more things about Agile one could be sceptical about, but bad ideas have nothing to do with the agile methodology itself. I actually read a book by one of the writers of that manifesto, and my most frequent thought was "What the ...". I don't think we need a new mechanism for sorting out bad ideas; things like brainstorming, where all ideas are momentarily treated as equal, are only one step. As soon as those results are processed, the quality of the ideas becomes clearer and clearer, no?
@@friedrichhofmann8111 There's no need for every team to start from 0, and not all ideas are equal. Some are supported by evidence. Everything in the video is related to fixing the nonsense I hear from the "let's just all get along" agile trainers.
In recent years I've seen the principle of "standing on the shoulders of giants" more and more replaced by "it's old, it's wrong, I know better", in a rather immature way. Not only in software engineering; it's more of a Zeitgeist thing.
I'd speculate that it's partly due to the internet becoming mainstream and providing everyone with easy access to so many different perspectives on the same subject. Individual authority (the "giants") has been reduced significantly, as now everyone can easily produce and distribute knowledge. And people can use that to offer different (immature) perspectives, creating pockets online of "The Right Way™", sometimes just for personal gain. And there exists no central authority reviewing the validity of all these perspectives.
@@BernardMcCarty Central is perhaps the wrong word. It can be a decentralized authority, as long as it is well established and recognized. The use case would be to separate perspectives based on objective, rationalized arguments (science) from those based purely on subjectivity. Splitting the two would reduce the noise introduced by weak arguments and keep our collective focus on ideas supported by strong arguments, so we can build exclusively on those. Those ideas are what I regard as "the shoulders of giants".
@@KiddyCut "Standing on the shoulders of giants" means building on the knowledge and achievements of the past to take the next step. While "giants" might refer to outstanding individuals, for me it can also be the sum of all the parts contributed by many individuals. This is one of the central principles of science: if you want more than just an opinion on a topic, you need to do your homework and study what others have already done in that area.

In software engineering, for example, I see a current misunderstanding of the old waterfall-like approaches vs. agile methodologies like Scrum. Some people just ignore all the knowledge built up before the Agile Manifesto and dismiss it as "it's old, it's wrong, I know better." They then completely miss the point of what Scrum is about: small increments to get fast feedback from your customer on whether you have met their (possibly changed) requirements, which was the main issue with the non-agile (i.e. waterfall-like) methodologies of the 90s. At the same time they completely misunderstand things like the Agile Manifesto: no upfront thinking anymore ("I'll just start coding and it will come out fine eventually. Iteration!"), documentation reduced to auto-generated API descriptions with nearly no context ("the code is the documentation"), complemented by "I don't write unit tests, we do end-to-end tests!". With all of this, software quality gets lower, and work takes even longer than before in situations where it is clear what needs to be done next, but it has to be broken down to fit into sprints, iterations without any value because there is no customer or other important stakeholder to check whether you built the right thing.

And don't get me wrong: I like Scrum, as I like other methodologies. It is a great tool in my toolbox. But it is a rich toolbox, filled with great stuff from decades of software engineering, some of it from before I was born.
With a lot of people nowadays I get the feeling that their toolbox is mostly empty apart from the Scrum hammer and the end-to-end-test screwdriver, and that they are doomed to reinvent the wheel over and over again and make the same mistakes as so many others did before them.
The key step for achieving Continuous Delivery is automating the deployment pipeline beyond running unit tests. When the pipeline also automatically runs integration tests and a set of acceptance tests, and then deploys the software if all is well, that is when you start getting the benefits of this style of working. Most teams are scared to automate the final parts of the pipeline, and have some sort of manual QA holding things up. Those teams will never see the benefits of Continuous Delivery, and most will have an over-worked architect or QA team struggling to keep up with the changes.
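The staged pipeline described above can be sketched as a simple script. This is only an illustration; the `true` placeholders stand in for a real project's build and test tooling, and the stage names are assumptions:

```shell
#!/bin/sh
# Minimal sketch of a deployment pipeline: each automated stage gates
# the next, and deployment only happens when every stage has passed.
# The `true` commands are placeholders for real build/test tooling.
set -e
cd "$(mktemp -d)"

run_stage() {
    name=$1; shift
    if "$@"; then
        echo "$name: passed" >> pipeline.log
    else
        echo "$name: FAILED - stopping the line" >> pipeline.log
        exit 1
    fi
}

run_stage "commit stage (unit tests)" true    # e.g. make unit-test
run_stage "integration tests"         true    # e.g. make integration-test
run_stage "acceptance tests"          true    # e.g. make acceptance-test
run_stage "deploy"                    true    # e.g. ./deploy.sh production

cat pipeline.log
```

The point of the shape is that a failure at any stage stops the line, so nothing reaches the deploy step without passing every automated check first.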
Precisely this. In these scenarios, QA are constantly chasing the recent changes for features and fixes to known bugs to make sure they do what they say they do, instead of being freed up to do the important work of finding the bugs that haven't been reported yet.
This comment (and this whole channel) exhibits the all-too-common tunnel vision that all software is equal. I'm not disagreeing that automatic deployment may be good, but very often it is not good, safe, or legal. Think about ABS brakes, fly-by-wire, nuclear, medical devices, contractual constraints... in fact, come to think of it, any software if you consider it from the user's point of view. The rest of the population and I hate three things most: change, change and change. I don't want to use your crappy (or my!) software; I want to get things done, and unfortunately the software is the way to get things done. When it changes under me while I need to pay bills or am on a deadline, it makes me crazy mad. From the user's point of view, the devil you know (today's crappy software) is often better than the new and improved crap.
@@Axel_Andersen Few people understand the different types of software better than this channel, and myself, having worked for 30 years developing mainly firmware, but also other types of software.
Automation is not the "key" step; it's the Achilles' heel. Everything he states is about bringing a complex system together. That doesn't mean that once you bring it together it needs to go to production. The "one-click deploy" is the software version of the microwave. We are paid a lot; I don't understand why we are not treated as well as chefs. Get the managers out of the kitchen and you will have a better restaurant. Quality and speed have a healthy relationship called efficiency. Automate everything up to the last step, but always have a human in the middle.
My current interpretation of why developers argue against things like TDD, pair programming or CI is that those arguments are just justifications. They aren't really arguing about how to work better; they are finding excuses so they can stick with processes and tools that allow them to work in isolation, without having to engage in intense collaboration with others in their teams. Many developers optimize their process for minimal interaction first and for efficient software development a distant second. It is rare to find developers who are willing to sacrifice their own comfort and mental energy to achieve more efficient, team-focused, long-term results.
You have hit on one of the great ironies of our industry. The stereotypical developer is a loner, a geek who spent their teens in darkened rooms writing code; they are probably an introvert and socially awkward, if not somewhere on the autism spectrum; in short, far more comfortable working on their own. The kicker, of course, is that successful, non-trivial software development requires teamwork and interacting with others. One of the best software development managers I've worked with believes that an Agile daily standup, requiring her team of socially awkward, introverted young people to stand in a circle and talk to others about what they're doing, is perhaps the cruelest thing you can do to them as people. We need to find other ways of working that the team finds more comfortable.
For some problems I have even found it effective to have five or so people working together on the same piece of code at the same time. It allows me to get the various engineers and scientists together at once while writing particularly difficult pieces of code, so we can make sure the code does what it needs to do with everyone involved right there to ask questions. Trying to make all the requirements specific enough to code them up completely without needing those people present turned out to take more time than just having them code together.
Pair programming as in collaborating with others, through PRs or when challenges are faced, that's fine. The idea of having someone else to program with before you write any line of code is just purely BS.
@@awmy3109 Even on this channel it was not promoted that way. It was done to bring new people into an already existing codebase and to work together on solving the new person's task.
Having read Dave's books and watched his videos, I can say from first-hand experience that implementing these ideas has demonstrated significant improvements for the projects I'm involved with. It takes time for team members to digest and implement them, but when you do, you achieve real process improvement.
I agree that for 80% of commits feature branches should be avoided, but I think there are times, when making complex changes, that a feature branch is necessary. I say this with caveats: a) merge/rebase from master into your feature branch regularly; b) if, while working on your feature branch, you have a small isolated fix/refactor, get that committed to master first, and then merge/rebase it into the feature branch.
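A minimal sketch of that hygiene in plain git, run in a throwaway repo. The branch and file names are made up for illustration:

```shell
#!/bin/sh
# Sketch of the feature-branch hygiene described above, in a throwaway
# repo. Branch and file names are hypothetical.
set -e
cd "$(mktemp -d)"
git init -q
git config user.email dev@example.com
git config user.name Dev
trunk=$(git symbolic-ref --short HEAD)   # "master" or "main", per git config

echo base > app.txt
git add app.txt && git commit -qm "initial commit"

# A feature branch for a genuinely complex change.
git checkout -qb feature/complex-change
echo "feature work" > feature.txt
git add feature.txt && git commit -qm "feature work"

# (b) A small isolated fix is committed straight to trunk...
git checkout -q "$trunk"
echo "small fix" > fix.txt
git add fix.txt && git commit -qm "isolated fix"

# (a) ...and trunk is merged back into the feature branch regularly,
# so the branch never drifts far from what everyone else sees.
git checkout -q feature/complex-change
git merge -q --no-edit "$trunk"
git rev-list --count HEAD   # 4: initial, feature work, fix, merge commit
```

The same shape works with `git rebase "$trunk"` instead of the merge, at the cost of rewriting the branch's history.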
Good point. It would be a judgement call when to create a feature branch, like a surgeon deciding which procedure to use in a particular case. Unfortunately, too many managers and architects aren't willing to let their developers make that call.
@@ForgottenKnight1 - I said it was unfortunate that managers are not willing to let their developers make that call. It would be better if managers would let their devs make that call.
If I am anything to go by, one reason people might shy away from CI/CD is the fear of failing quite publicly. A lot of us don't know it, but we were taught to be ashamed when we were wrong about something. While I wholeheartedly agree that failure is innovation's best friend, and will absolutely urge others around me to shake off the fear of failure, I find it quite hard to shake myself.
Companies usually fail badly at this one. Leaders from the highest levels should lead the way, discuss some of their failures publicly, and show that it is acceptable to fail. Then all you need is to keep it up and remind people occasionally, and the rest happens by itself over time.
The problem with TDD is that in order to have it, you need software designed and built from the start to support using or isolating any portion of the code without starting the whole thing. If you're dealing with legacy code that doesn't allow injecting alternate dependencies, or software that relies on multiple threads, it becomes much harder to do. Naturally you can try to refactor existing code to enable TDD, but that means changing existing code, which would require the old process of manual testing, not to mention approval from management, who may not see things the same way you do. It's a cycle you're stuck in.
Yes, retrofitting TDD is complex and disruptive. My advice is generally to defend existing code with acceptance tests, and to do all new code with TDD, refactoring to enable it, tactically, as needed.
@@ContinuousDelivery Can confirm, this works in practice! Have done this on an existing gigantic monolith project where, in the beginning, unit tests were dismissed as "not a valuable use of time"
I work with a lot of science and engineering code where we are building simulations for making medicine. I have been able to get branches down to about a week on average, but I found that trying to shorten them beyond that caused more problems. I think part of the issue is that many of the people doing development are not primarily developers. They are engineers and scientists who need to get a model built, and they need experienced support and someone who can look over their code before it gets merged in. The other part is that when building a model it is often hard to determine what "correct" is. The branches tend to get more group discussion on whether the approach being used to solve the problem is the right one.
I love this. My biggest fight is with agile teams equivocating way too much, especially on things like demanding user stories have hour estimates, or pulling stories into a sprint before the existing stories have sufficient testing. I continually have stakeholders pushing these ideas, and when I push back they throw the "dogmatic" label at me.
Definitely agree with you on CI. I would love to remove feature branches! That said, I sit comfortably with feature branches that live for a short amount of time (as you have also promoted). An alternative is trial integration, whereby you get the CI server to pull feature branches into ephemeral integration branches and see what happens. It's CI on the fly! Sometimes it's easier to get existing teams to do this as a stepping stone to full CI.
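That trial-integration idea can be sketched locally with plain git; a CI server would do the same thing on every push. The branch names and the test step are assumptions:

```shell
#!/bin/sh
# Sketch of "trial integration": merge the open feature branches into a
# throwaway branch, test the combination, and delete the branch either
# way. Trunk itself is never touched. Names are hypothetical.
set -e
cd "$(mktemp -d)"
git init -q
git config user.email ci@example.com
git config user.name CI
trunk=$(git symbolic-ref --short HEAD)

echo base > app.txt
git add app.txt && git commit -qm "base"

# Two in-flight feature branches, each adding its own file.
for f in feature/alpha feature/beta; do
    git checkout -qb "$f" "$trunk"
    echo "$f" > "$(basename "$f").txt"
    git add . && git commit -qm "work on $f"
done

# Ephemeral integration branch: combine everything and run the tests.
git checkout -qb integration/trial "$trunk"
git merge -q --no-edit feature/alpha feature/beta
echo "running tests on the combined code..."   # e.g. make test

# Throw the trial away; report results to the team instead.
git checkout -q "$trunk"
git branch -qD integration/trial
```

If the combined merge or the tests fail, that is early feedback that the branches conflict, which is exactly the signal full CI would have given sooner.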
Agree with you on the whole on TDD in terms of fast feedback. I think there are times, though, where TDD ends up exploding your regression set, whereas BDD achieves the same quick feedback with better coverage using tests executed higher up in the system; it can be less costly to develop and maintain. Similarly, when working in safety-critical systems, using equivalence-class partitioning followed by constrained random testing will generally give you much better coverage than TDD.
The main reason I like feature branches is that we can release features in the order they are ready, not the order in which we started working on them.
Love the talk about the Baloney Detector. I watched the entire video, but I found your connection to Agile weak, and I have experienced the opposite effect: the constant feedback that Agile principles facilitate promotes the strongest ideas. If you are experiencing bad ideas in your teams, I'm not sure that has anything to do with Agile. It would happen equally in Waterfall, and Carl Sagan's detector will help with that too.
Martin Fowler recently updated his article on continuous integration and stated that, based on his knowledge, it's not suitable for projects with no fixed team assigned to them, e.g. open source projects. How do you feel about that?
Yes, I'd agree with that, though I usually say it the other way around: "Feature branching and PRs were invented for open source projects, because you don't know, and so can't trust, the people doing the work. If your team works like that, it has serious problems."
@@ContinuousDelivery - You are very fortunate if you can trust your team members so much that you don't review their code. That kind of trust is rare. The vast majority of teams use the trust but review approach.
@@deanschulze3129 You seem to want an argument, but I am not very interested in your straw men. No, I didn't say that there was no review; there are better ways to achieve a review than pull requests. You seem to assume that the approach I recommend cannot work, yet it patently does work for lots of people, and what data we have says that it works better than the alternatives. I don't claim that any other way can't work. I only say that the approach I recommend is widely used in successful teams, and that the data, based on the DORA metrics and data collection, says this is the route to better software faster.
@@ContinuousDelivery Which data do we have saying that pair programming is better than a code review process in all team setups? Pair programming is hardly WIDELY used. I haven't worked in a single company that used pair programming as its main development methodology. Every single team I have worked in so far (and that's easily more than 10 teams at this point) has used the pull-request review approach. Yes, this is anecdotal evidence, but I would expect at least one of the teams I have worked with to have used pair programming if it were as widely used as you suggest.

Pair programming will not work in environments where junior developers outnumber senior developers. For example, one senior developer and three junior developers is a setup I have worked in a few times already. Pair programming without pull-request reviews would mean that some pairs would consist of two junior developers, and their work would not be reviewed by the senior developer before being pushed to trunk. How can that be a better approach than pull-request reviews, where the senior developer can see all the changes and provide feedback before they get merged into trunk? Team structures like this are very common, and pair programming simply doesn't work in them. I know, because I review the code of said junior developers, and if those changes were merged into trunk without my review, the quality of the codebase would degrade extremely quickly. It's definitely useful to have pair programming sessions with junior devs from time to time, to teach them something and so on, but doing it all the time is untenable in that kind of team setup.
I recently left a team because of the frustration of not being able to merge into `master`. Twice I had to spend two weeks of merging, catching up with other people, remerging, waiting for approval from the Architect, etc, and hoping to find a moment when nobody else was checking stuff in. At one point I got angry and demanded my changes got accepted without review and nobody merge anything into master until I got my changes merged into it. That was a 12-man team heavily using feature branches. I did stay until I felt I had done enough to fulfill my contract, then quit. I am all for committing directly into main (after some sanity checks & local unit tests). These complex acceptance procedures give a false sense of security, but in effect only raise frustration and stress levels.
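One lightweight way to make "sanity checks & local unit tests" automatic before every commit to main is a client-side git hook. This is only a sketch; the actual checks are placeholders for whatever lint and unit-test commands a real project uses:

```shell
#!/bin/sh
# Sketch of a pre-commit hook gating direct commits to main. The real
# checks (lint, unit tests) are placeholders; a non-zero exit from the
# hook aborts the commit.
set -e
cd "$(mktemp -d)"
git init -q
git config user.email dev@example.com
git config user.name Dev

cat > .git/hooks/pre-commit <<'EOF'
#!/bin/sh
# Fast local gate, run from the repo root before every commit.
# Slow suites belong in the CI pipeline, not here.
echo "sanity checks ran" >> hook.log
true   # e.g. make lint && make unit-test
EOF
chmod +x .git/hooks/pre-commit

echo hello > file.txt
git add file.txt
git commit -qm "straight to main, gated by the hook"
git rev-list --count HEAD   # 1 commit made, and the hook ran first
```

The hook only protects your own machine, of course; the CI server still has to run the full suite on every commit to trunk.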
In such an environment, merges are done by backlog priority. Is my item more important to the business than yours? Then, even if I finish after you, if your code is not merged yet, I get to merge first because of that priority. In short, you apply a merging strategy based on business priority.
@@ForgottenKnight1 Nah. I had the privilege of re-designing a bit on the core of the system that had major impact on the rest of the system. In order to fix design errors that kept the system from fulfilling its scalability requirements. So basically every other checkin by anyone else required me to update my branch -- which required new approval by the QA people, etc. Business-wise, my update was THE most important that was being done: the product was not viable without it.
@@TheEvertw What a nightmare! Congratulations on getting out of there in one piece! That team sounds like they're real good at engineering, engineering their own problems that is.
Sounds like a nightmare. In our team (about 35 developers), if a PR has substantial changes affecting the whole project (and therefore causing merge hell every time the main branch is updated), it gets prioritised, approved and merged very quickly. Apart from that, we have short-lived feature branches that typically get reviewed and merged in a day or two, so it's not a major problem. I've been following this channel for a long time, but I'm honestly struggling to see how CI would work in our company, taking into consideration that about 80-90% of our devs are juniors (so their code needs vetting every time), and we work in a domain that requires thorough app testing (so a manual QA process is compulsory), especially accessibility testing, which is impossible to do with any automated tools at the moment.
The main reason I'm not working with continuous integration right now is simple. My company has a culture of Cowboy Coding! :( They can't even conceive pair programming or continuous integration in the way that should be done. Everything is a mess of red tape over a golden coating of "we do agile". I'm the lone developer/maintainer of a framework and docker image that uses that framework. And this framework is used by every single customer integration this environment has. In short, I'm forbidden to die or quit the project. If at least the salary was compatible with such responsibility all would be fine.
You might be shocked to hear that a better salary doesn't fix any of the frustration you feel as a result of "red tape". I say this from my own experience, from a period in which my employer offered to double my pay. I hope the only thing stopping you from leaving is preserving your professional reputation with the company, and even that is expendable if you believe you can help others in a better environment. Once the company can, it will treat you as expendable. The current job market is not very good if you are lacking years of experience, though, so most people recommend not abandoning your current job until it improves.
18:15 - A successful merge is just that. It does not validate that the final result satisfies business needs; nor does trunk-based development or pair programming. What validates a solution are the tests run against the codebase, no matter what the branching/merging strategies are. Do the tests cover all business cases? Are the tests failing? If so, you go and have a look, because you broke something. Maybe the break is intentional (requirements changed), maybe not.
Exactly. Which is exactly why tools like GitHub, GitLab etc. have a feature to run the tests against the feature branch with the trunk merged in, to confirm that merging does not break the build or any required feature. And then all the problems with merging potentially breaking the code magically disappear.
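What those merged-result pipelines do can be reproduced in miniature with plain git: build the merge of trunk and the branch, test that, then throw the merge away. Branch and file names here are made up for illustration:

```shell
#!/bin/sh
# Miniature of a merged-result pipeline: test trunk + feature merged
# together, without ever advancing trunk. Names are hypothetical.
set -e
cd "$(mktemp -d)"
git init -q
git config user.email ci@example.com
git config user.name CI
trunk=$(git symbolic-ref --short HEAD)

echo base > app.txt
git add app.txt && git commit -qm "base"

git checkout -qb feature/x
echo change > x.txt
git add x.txt && git commit -qm "feature work"

# Merge into trunk without committing, run the tests on the result...
git checkout -q "$trunk"
git merge -q --no-commit --no-ff feature/x
echo "testing the merged result..."   # e.g. make test

# ...then discard the merge: trunk is exactly where it started.
git merge --abort
git rev-list --count HEAD   # still 1
```

The caveat, which the CI tools share, is that the result is only valid at the moment it runs; if trunk moves again before the branch is merged, the combination has to be tested again.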
Short answer: it depends. Longer answer: it depends what kind of software you are making. Most software is a bag of features with shared data. The work I do is building organic models where the whole is essential. Things like optimisation or simulation models. I have never been able to convince your typical full-stack dev that "you can't test a leg or an arm, only the body". I haven't got the answer yet, but I'm still searching.
Excellent video! Getting in trouble for taking a different, simpler and ultimately wildly successful approach is the story of my career. Dogma is the enemy of innovation. One question: where does the notion that all ideas are equal arise from in agility? It isn't something I came across when I worked for an actual agile company.
In my experience people who think agile is bad are people who don't understand that the fundamental part of agile is to get fast feedback and make course corrections. I get people in interviews talking about "strict agile" as how many days a week stand up is done and nonsense like that. Agile is not defined by meetings.
I have a couple of things I would add on top of what I think is a very good video about all of this; they are perhaps only lightly visited in it.

The right thing today isn't the right thing tomorrow. There are many teams where I would advocate a Scrum-based approach, because of my read of the team at that point in time and their understanding of their ways of working. But I may also join an existing and established Scrum team and choose to advocate moving to more of a Kanban/continuous-flow model. The first team is likely in a very chaotic situation: morale is low, deadlines are unwieldy, quality is poor, and it's complete chaos. They need help getting a handle on things, a structure that can do some heavy lifting and give them room to relax, think and restore some zen. The second team has good flow, things are orderly and everything is going smoothly, which makes it interesting: why move to Kanban/continuous flow? Was doing Scrum in the first place wrong? To the second question: no, doing Scrum was not wrong; it was likely the right decision for the context they found themselves in when they adopted it. To the first question: moving to Kanban/continuous flow, for the right team, could bring about much greater velocity and significantly reduced lead times. Equally, it might not, in which case they shouldn't change. There is a progression here, and it's entirely possible that the first and second teams are the same team, who applied Kanban without need, which led them into the first scenario (anecdotally, this has been one of my experiences in my career).

Doing the right thing for the wrong reasons is the wrong thing. This builds on the above. A team might see the second team above moving to Kanban and automatically think they should too. They may even get lucky, and it may well work; they might see some modest improvements.
But their improvements will be capped, because they won't have the metrics or the depth of understanding needed to figure out what a good next step might be. They rely on luck to succeed, rather than succeeding through carefully thought-out plans where the odds are skewed massively in favour of success. In all likelihood, a team applying this strategy will end up in the first scenario: complete chaos.

So what I would suggest is: rigorously applying a framework without understanding why is a dogmatic recipe for failure. Instead, take time to understand your own problems in your own environment, distinguish problem from symptom, and carefully consider whether what you think is the problem might actually be a symptom, something you can put a plaster over but that is fundamentally not the core problem. Part of doing this, and part of consuming videos like this one, requires understanding a few things. Firstly, differentiate anecdotes from evidence. Secondly, stories and examples are only anecdotes; evidence is statistics. Thirdly, and finally, there is no way to shortcut the task of understanding your own environment. You can't outsource it, and there is no silver bullet that always works regardless of context; you have to pay the price of learning your environment.
Do you consult for groups of developers? I'm wondering if there are any good studies on applying the right agile models to groups. I was reading a single study on the effect of short periods of pair programming in pairs that were either heterogeneous or homogeneous (in things like personality and pre-existing knowledge). There is no silver bullet for getting people to work well together, but perhaps there is a common pattern for finding groups that mesh well. Do you have any anecdotes that suggest something like that? I feel like personality matches and social skills are overlooked. I personally thought (in university) that it would be odd to be co-workers with some of my peers because of a general lack of social skills. I admit it was immature of me to think that way at the time, though.
@@retagainez Technically I'm just a software engineering contractor, but I inevitably end up doing much more than writing code. Code is the simplest bit, ultimately. I can't reference any studies; I'm drawing purely on my own anecdotal experience across numerous teams of engineers, and a raft of successes and mistakes, as I've learnt to drop the dogma and pick up circumstantial pragmatism.

With regard to pair programming, I'll consider mobbing first. I find the most valuable and successful mob sessions are when everyone is focused on their problem but brings unique perspectives to throw at it and see what sticks and what doesn't. The thing I've found that inhibits successful group sessions is ego getting involved. Gut feels are useful for giving an initial idea of a direction, but they are purely there to provide a starting point rather than to be the final state of what should be. I've had incredibly productive pairing sessions with people who are very similar to me in temperament and personality, and also with people who are very different.

As for the scenario you describe around poor social skills, I can see how that might be tricky; it's very difficult to know how best to approach that kind of situation. It may be that for some people mobbing is better (where there is a whole bunch of you working on a problem), but for others pairing. It really depends on the specifics. I would say to try it and other things, see what does and doesn't work, and don't be afraid of failure either. All I'd suggest with mobbing is that it does require structure: it needs someone who can tie it all together, keep focus, and pull the team back on track when they get derailed, which is a skill in and of itself. Also, I don't think trivial stuff should be mobbed, unless it serves the purpose of training up more junior folks. Mobbing is more expensive, so make it give the most value you can.
Sorry, I don't think I've really offered much more beyond a couple of anecdotes which hardly constitutes evidence. For me it's gradually moving towards evidence by dint of me working with a wider variety of teams than a perm employee would, but it's still just a collection of anecdotes in the grand scheme of things. I have my current working theories about it all, but these will always inevitably change in the face of new data that help shape the body of evidence over my career.
@@azena. Well, I appreciate the general observations you've made more than specific scenarios. Certainly a good read, thanks. It makes sense what you have to say about mob programming. I think it makes perfect sense that mob programming would be great for getting people who struggle socially to add value in a cooperative setting. I haven't yet experienced any mob programming, and the amount of pair programming I have done has been limited, even if it has been my favorite form of collaboration yet. On a side note, I envy you. I would definitely enjoy contracting, but I've yet to break into even the entry-level market. Not that inexperience would be a barrier to contracting, but perhaps I just want a bit of reputation before I get into that.
I am in the process of starting a PMO team in my company to mirror the checks and balances system that Scrum proposed on a larger scale. It will be the home base for the Scrum masters, so that we have a Product dept., an Engineering dept., and a Process Hygiene dept. Your explanation of quantifiable measurement helped me greatly. As a former UX professional, it has always been a challenge to give "design" - especially visual design quality - a quantifiable metric that can be measured during development and is not based on the personal opinion of stakeholders. I wonder if it is possible to somehow adopt your "does it fit the rest of the solution" TDD approach for the short term, until I can transform the organization to focus on having a viable product discovery process that validates(!) design options before they are even considered to be built by the engineering professionals.
When you said that we need to free ourselves of the appeal to authority, I believe that should also include deferring to any 'experts'. 7 out of 10 dentists saying Agile is best is not a swaying argument.
Bingo. We need rigorous testing of the various ways of developing software to see which practices work, and which do not. No one has tested pair programming against solo programming. Controlling for all the variables would be challenging, but without such tests saying one practice is better than another is subjective. It's worth noting that agile consulting was born from a single failed project -- the C3 project at Chrysler in the late 1990s. But that team was the self-anointed best team in the history of software development so who are we to question them.
@@deanschulze3129 "No one has tested pair programming against solo programming" 🤔www.researchgate.net/publication/222408325_The_effectiveness_of_pair_programming_A_meta-analysis www.sciencedirect.com/science/article/abs/pii/S0950584905001412 link.springer.com/article/10.7603/s40601-013-0030-0
@@BernardMcCarty That test was done using college students, not senior developers so it's pretty much worthless. Also it was a one week project that was part of a course so it was very artificial. I've not seen a realistic test protocol that controls for all the variables of software development, let alone a realistic test.
@Continuous delivery Hi Dave! We all know you should refactor production code when the tests are passing; they're great. It's good advice. What do you think about only refactoring TESTS when they're failing? After all, if a test is passing, and we refactor it, and it still passes, we don't know if we broke it or not - we didn't see it fail. So maybe in order to refactor a test, you should first break production, see the test go red, refactor, run it again, make sure it still fails, and then bring the production code back, and the test should now pass. Thoughts?
My preference is to refactor the tests when the test is passing, then consciously change the code to make the test fail to confirm that the test is working. If you refactor the test while it is failing it is easier to get lost and end up in a mess.
You could also try out Mutation Testing. Same principle. Have green tests, mutate the production code and then see at least one test fail. Will find many sorts of problems in your production code and/or tests, like missing tests, badly chosen test data, flaws in your production code, etc.
@@birgitkratz904 thank you for a nice suggestion. I always get high scores in MT, and the mutants that live are often neutral (for example "array.length == 0" to "array.length
Sorry you felt that, I was thinking about the equivocation that I described in the episode, but on reflection, I think you may be right that I didn’t tie that idea in clearly enough.
Hi Dave, love your channel. Agree you’re not dogmatic but yes you have opinions - many I agree with. I do disagree with the fundamental statement that waterfall is bad for building software. I’m an agile guy - I like agile I do agile reasonably well. However, people do agile badly. Similarly for certain systems - especially in regulatory contexts, waterfall is a good methodology for building software but again - many people do waterfall extremely badly. There’s this idea that waterfall means developments become out of date too quickly. This isn’t true. When done well all it means is you have a (rapid) gated approach to dev. Sometimes agile is appropriate sometimes waterfall is appropriate. The key is to figure out when one is more appropriate than the other. Generally in the case where you need faster feedback from the customer, agile works better
Thanks for the videos, Dave and sorry for leaving the question here. Is it fair to compare software with buildings and say "Buildings have blueprints and that - to an extent - reflect the requirements. The blueprint is specifically used to sanity-check plans for modification (e.g. "can this column carry the load of a new wall?") Software should have some kind of a blueprint, too, that when you need to make changes to it, you can reflect on it." I can see a variety of methods around but perhaps I can't grasp the importance of one over others due to my lack of experience. Yes, I'm sure many would say tests are the best way to define the expected behaviour of a system but 1) tests can be incomplete as they often are, and 2) tests provide a huge and mostly disjoined corpus of code that doesn't "speak" to humans like plain English (or whatever) does. If anything, they don't have a flow to them like a piece of text; no start, no end, just disjointed paragraphs which hardly depict a shape that you can keep in your head. What is the recommended way of setting the requirements in stone (let's say in the absence of tests) so that future developers can reflect back on that, for example when it comes to refactoring the code?
I accept that CD done correctly offers superior speed and quality of software. However (attempting to add some nuance here), high quality manure can also be speedily delivered - i.e. delivering the wrong high quality features or implementations at speed is not the goal. It's important that features are critiqued appropriately: not just the code, but the problem they are trying to solve, the approach taken, etc. The cost of not integrating changes regularly is understood. By the same token, we shouldn't underestimate the cost of integrating code that was "wrong" in the sense that it is not solving a valid problem, or the problem it was solving was framed incorrectly, even though the tests pass, all best practices have been followed, and it was deployed within the hour. PRs and auto-deployed PR branches (i.e. with the merged changes) provide a good compromise by providing a space for feedback / critique / debate / consideration, as well as an isolated QA environment, which can be useful for really considering the implications of a feature before it's integrated. I appreciate CD practitioners will solve this problem with feature flags, but at that point the code is already integrated, and depending on your architecture it may be a harder task to rip it out. So my view is that, when done right, both approaches can be fruitful. However, I am on a journey that is moving towards CD, because I believe many changes do deserve quick integration, especially where confidence is high. I also think there should be another solution to ensure features are discussed, and that the activity that would usually happen on a feature branch PR still takes place.
The Farley Shelf Principle: Does my new shelf fit into the space that it’s meant to fit into correctly? We’re not testing the length; we’re testing the fit!
For a mental model of branching I like to imagine I do all the work myself, but have to switch features every day of the week:
Monday: feature 1
Tuesday: feature 2
Wednesday: feature 3
Thursday: feature 4
Friday: feature 5
By being too equivocal and treating "all opinions as equally valid", we miss that some opinions aren't, and we need to find ways to detect those opinions and correct them. This is not really what agile says explicitly, but it is how many people approach the "self-organising" principle: that everyone can decide for themselves what to do and how to do it. I think that decision making should be team-scoped, not at the level of individuals - collective decision making within a team. I probably didn't say this clearly enough in the video - sorry, it was what I had in mind.
@@ContinuousDelivery Ah I see, excellent point. Yes my initial thoughts from this video were that I need to be more forceful at work that waterfall (advance planning with long feedback cycles) simply *does not work* for effective software development. And be prepared to back that with the oodles of evidence available. The agile coach at my company is a great champion for things like equality, psychological safety, giving and receiving feedback. All good things. But we miss the one thing that makes all this effective: the ability to respond to change. And that is lost the moment you start planning and collecting feedback on long timescales.
@@ContinuousDelivery One more thought that could be a potential video topic: the delay in gathering customer feedback. My company does an okay job of making a release at least every couple of weeks or so. But most of our feature code is hidden behind feature flags that the customer might take 6+ months to turn on (we have a B2B model and partner with a large multinational enterprise). So we get very little *customer* feedback - instead, we rely on internal stakeholders who guess at what our customer wants. This is an example of fake agile IMO - a slightly subtle one because we are releasing somewhat frequently, so on paper it can look pretty good. What are your thoughts on this kind of situation?
Not sure if anyone's mentioned this, but the problems with feature branching seem similar to the problems with long-running database transactions. That is, big transactions lock access to the database, and can cause other transactions to fail and have to be retried. The example you gave with the two merges incrementing a number is a classic race condition.
Yes! They ARE THE SAME PROBLEM. It is all about concurrent change in information really. If you have copies of information in more than one place, and it is changing, then the information content will diverge, so which one is true? Transactions were one approach at limiting the impact of this problem by providing a mechanism to decide which version of the "truth" to choose, and by dividing up the work into units of change that need to be atomic - all work together or all fail together. I think that this is exactly the same problem and it is everywhere.
@@ContinuousDelivery how are they the same? That makes no sense. In a DB transaction either everything happens or nothing happens. On the other hand feature branching just gives you merging problems with your code, which is not the same as altering data.
@@comercial2819 both are examples of information changing in two or more places, it is possible that such changes may be mergeable, maybe I changed the account balance, and maybe you changed the account name. We could have merged those changes, but if they were in a transaction the second change would be rejected if we both opened the transactions at the same time. This is just version control at a different resolution of detail. Equally, if you FB we may be able to merge at the end, or we may not. One is PESSIMISTIC LOCKING (the Transaction) the other is OPTIMISTIC LOCKING (the version control System). Early VCSs did PESSIMISTIC LOCKING too. But it is all about information changing in two places concurrently and how we deal with the results, how do we pick the truth now that concept is blurred.
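The optimistic-locking side of that analogy can be sketched in a few lines of Python (version numbers standing in for both a DB row version and a VCS revision; the class and method names here are illustrative, not from any particular library):

```python
class Record:
    """A piece of shared state guarded by optimistic locking."""

    def __init__(self, value):
        self.value = value
        self.version = 0

    def read(self):
        # readers take no lock; they just remember which version they saw
        return self.value, self.version

    def write(self, new_value, expected_version):
        # the write succeeds only if nobody else changed the record since
        # we read it - otherwise the caller must re-read and "merge",
        # just like resolving a conflict in a version control system
        if self.version != expected_version:
            return False  # conflict detected: like a rejected commit
        self.value = new_value
        self.version += 1
        return True

rec = Record(value=100)

# two "transactions" read the same state concurrently
_, version_a = rec.read()
_, version_b = rec.read()

print(rec.write(150, version_a))  # True: first writer wins
print(rec.write(175, version_b))  # False: second writer sees the conflict
```

A pessimistic scheme would instead block the second reader until the first writer finished - which is exactly the trade-off between early lock-based VCSs and modern merge-based ones.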
@@ContinuousDelivery ok in that sense it would depend on the lock you are using for your transaction, so indeed you could end up with the same race condition.
If there is a set of ideas that are good, and you stick to them due to experience and knowledge, it appears dogmatic to some. But that's mostly due to their lack of experience. You don't have to jump from every bridge to know that jumping from bridges is generally lethal. Agile in itself is a set of ideas that appear dogmatic to some. As with everything, things have to be applied reasonably and situationally. Those who approach things in a dogmatic way - i.e. without putting in thought as to why something is bad or good - do it badly. However, there are things that are tried and tested, and either do or do not work. Also, there are absurd ideas that do not merit being tested.
I would really like it if my main issue was moving from feature branching to CI :-) The architect's proposal is usually his/her own enhanced version of git flow, and the developers' proposal is long-lived developer branches. How can they work if someone else is continuously creating bugs in their code? Not a question of skills, more a question of mindset IMHO.
Pair programming, apart from being an unpleasant experience for many people (this is purely subjective; just because it is pleasant for you doesn't mean it is pleasant for everyone), is a completely impossible practice in a team where you have a much higher number of junior developers than senior developers (which is a very common team setup). In those scenarios it is impossible to have pure pair programming, because by necessity it pairs together two junior developers who are usually equally clueless about how to write good software (through no fault of their own - they just haven't yet had the time to learn), and you usually argue against having pull request reviews (at least that's what I've seen so far), so that code would not get a proper review from an experienced developer. Even Google doesn't use full-time pair programming; they use the typical code review process that most companies have (most companies do not use pair programming). Pair programming without later code review is inherently unscalable in companies like Google; they simply need to have code review regardless of the fact that the code was developed by two people looking at it, because they could be modifying code that does not belong to their team. I don't know if the correct word for your advocacy of pair programming is dogmatic, but I don't see any evidence of pair programming being better than a standard code review process in most projects. On the face of it, it looks like an inherently slower approach that only works in a very small team with an equal distribution of senior and junior developers. But then senior developers rarely get paired together, which means senior developers don't get a proper review from other senior developers. You have never presented any convincing data that pair programming is inherently better than the typical code review process that most companies use.
Also, the fact that most companies use a typical code review process and not pair programming should tell you something about what most software engineers (including many very talented and experienced ones) think about how practical an approach it is.
Another reason that pair programming is no substitute for code review is that in some organizations your code has to be reviewed by multiple developers. In some cases one of the reviewers has to be an architect.
One of the best episodes, if not the best, in quite some time IMO. Not because I didn't like the others, but because it's tackling such an important and under-discussed topic: biases in our industry. In the end, it's reality that matters, and those who delude themselves are simply more likely to fail.
You're lucky, you've got a Product owner. I'd kill for a Product Owner to decide what the damn thing is meant to be and do. I'm just a bare-footed urchin happy to live in a pothole in the middle of t' street.
TDD is a good example of an iffy practice to make universal. It's typically slower and bad to use in speculative software. I'm usually not sure, when I start a new system, how it is going to come together. Creating test points for rev 1 just means I'm taking longer to learn what probably doesn't work. After I've built a working model and I know, more clearly, what a proper implementation looks like. I can then move to solidify it with revised code and then with tests. Developer built tests always come with the handicap that they are built with the same understanding, blind spots, and potential coupling in/with the code.
What you are describing sounds a lot like writing a Proof of Concept, or a Spike in Scrum terminology. Another word for it is exploratory testing. You're just doing it manually instead of using a test framework to ask the questions. There is a lot of value in automating such tests, especially if the questions you're asking are about external services. Those tests turn into contracts, and can tell you if one of your assumptions ever changes. In the end, that's still test-driven development. After all, it's not called unit test driven development.
Reads like a post from someone who has never practiced TDD to me. If your tests are tightly coupled to your implementation then something is wrong. You’re not just ensuring the implementation works but that your solution design is loosely coupled and you have separation of concern etc. It helps that when things do ‘come together’ you have clearly defined boundaries. You don’t need to know the overall design of the final solution to practice TDD, far from it. Try starting from the actual business/domain logic and work out from there as this is the most important part of the application.
@@leerothman2715 - Unit tests are coupled to the implementation at the function/method level. That's why they are unit tests instead of integration level tests.
Robert Martin has a demonstration of TDD which opened my eyes. I'd have to find it and come back to the comment, and I'm on my phone now, but I'm pretty sure it's in part 5 of a talk where he's wearing a white shirt and has a white background - a long video, and the demonstration is somewhere after the middle of it. I wish you good luck with this, but if you find it you'll remember me.
Well, obviously people need to repeat the same or similar mistakes over and over. An idea might seem good at first but the true costs and pitfalls manifest later. You can help and warn people but some will ignore it until they make the full experience themselves.
Some do, and it works great. I generally assume that people don't adopt CD as a result of lack of experience or ignorance of the approach, because it does work better than the alternatives, and is easiest to adopt at the start of a new project.
If people are calling you dogmatic, it’s probably about how you’re saying things. We can’t look into your mind and somehow feel you’re actually capable of changing your mind. All we see is you speaking in absolutes and saying things like “x doesn’t work” when there are clearly companies delivering software while doing x. Or little lies such as claiming Google and Amazon do TDD, when they don’t. I work in AWS and have worked for 3 teams and collaborated with many others. We all laugh at the idea of prescribing how people should work. So, rather than dig your heels in, listen.
This! All tools and processes have valid, nay optimal use cases. One has to always use their brain and adapt. But you see, you can’t sell common sense can you?
Sounds more like unnecessary churn to me, because not every project needs a "committee of experts" to catch a mistake. And you shouldn't have to make key design or functional decisions on a daily basis anyway. "Agile" means knowing what you are doing and knowing how to incorporate the right decision making and collaboration for each project, not following the same blueprint for everything. It shouldn't take a team of experts to catch that a shelf won't fit if the person designing the shelf knows what they are doing. But then again, you can't properly build a shelf in a vacuum; you need information on the purpose of the shelf, where it is going to sit within a larger space, the area available for it, what the shelf is going to be holding, and so forth. Those things then become the key testing and development criteria for the shelf. All of that shouldn't require daily reviews either, unless you collectively don't know what you are doing and are simply trying new things to figure it out - which is churn.
The problem is decisions. Somebody - the user, designer, product manager [spit] - has to make decisions and be held accountable for them. Agile doesn't enforce that and what's more gives them an opportunity to dodge. "Oh, that's bureaucracy, we need to move at web speed". "We're not doing BDUF, LOL".
Yes, but we work in a more incremental way, so that half-finished doesn't mean "low quality" or "not working". It just means half of what it takes to make the feature, but written to production quality and fully tested, as far as what is there goes.
@@ContinuousDelivery yes that makes sense, thank you. Maybe we are just dealing too much with legacy stuff which was not originally written with this in mind.
I am dogmatic: The only thing I know for sure is that I know nothing. All other is based on beliefs that my experiences can be generalized and other peoples experiences can be verified. To get anywhere one needs a strong mind and a stoic sense. Scientific method will set us free: Study - Plan - Act - repeat.
Toxic processes: "misused waterfall", meaning projects longer than 2 months (the author of "waterfall" never intended it to be used for anything longer than 2 months)... Dave probably did not know that fact. "Corporate scaled Agile", as the blind scaling counteracts the goal of agility for teams in an "Agile" approach.
In my opinion TDD concentrates too much on automated testing and ignores other testing possibilities. Sure, automated tests are very useful, but in some cases they are much easier to write and maintain than in others. Tests are easy to write for discrete values; for continuous values this is much more difficult, especially if the values have random errors. An example of such values is measurements. For these it is useful, in addition to automated tests, to write tools that work together with your program, receive its data, and display that data graphically, so people can analyse it.
Continuous values are a test case I haven't really seen around much. You could do it up to a certain precision, I guess, but if losing some of the precision is inexcusable, you definitely are going to have to write a more thoughtful test. Although, if you are comparing data graphically, aren't you tolerating some loss of precision one way or another? How precise are we talking when it comes to people analyzing data? It's like checking whether a web page's form element is in the correct spot to the exact pixel (and somebody missing it if it shifted one pixel left or right) - that's pretty precise.
@@retagainez One of the tasks that I have is measurement-to-track association. I can estimate the RMS of azimuth and distance errors for measurements. I also extrapolate the track and calculate its "errors" (an error covariance matrix). Then I can draw the region in which the track measurement must lie with a probability of 0.95. This region has the form of an ellipse. People can see whether a measurement is inside the error ellipse or not. The same result can be calculated using the Mahalanobis distance.
@@retagainez When testing continuous values you can still write automated tests. For the simplest cases you will know what values the tested function must return for a given argument, and you can compare actual results with expected values, taking computation errors into account. If you don't know the exact values the function must return, you can try other strategies. For example, if the function has an inverse, you can apply the function to the argument, then apply the inverse, and compare the result with the initial argument. Or you can calculate the result using different approaches and compare them. There are no standard strategies that will work for every case. In the worst case, if you cannot check the correctness of results at all, you can save the results for some test cases and check against them to find out if you broke something when you changed the program. Using graphics you can check, for example, whether functions have the expected shape.
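The "apply the function, then its inverse" strategy described above can be sketched in a few lines of Python. The coordinate conversions here are hypothetical stand-ins for real measurement code, and the tolerances are illustrative - in practice you'd pick them from your known error bounds:

```python
import math

def to_polar(x, y):
    """Convert Cartesian coordinates to (distance, azimuth)."""
    return math.hypot(x, y), math.atan2(y, x)

def to_cartesian(r, theta):
    """Inverse of to_polar."""
    return r * math.cos(theta), r * math.sin(theta)

def round_trip_ok(x, y, rel_tol=1e-9, abs_tol=1e-12):
    # round-trip property: applying the function and then its inverse
    # should recover the input, within floating-point tolerance
    r, theta = to_polar(x, y)
    x2, y2 = to_cartesian(r, theta)
    return (math.isclose(x, x2, rel_tol=rel_tol, abs_tol=abs_tol)
            and math.isclose(y, y2, rel_tol=rel_tol, abs_tol=abs_tol))

# exercise the property over a small grid of inputs, including zeros
print(all(round_trip_ok(x, y)
          for x in (-3.5, 0.0, 2.0)
          for y in (-1.0, 0.0, 4.25)))  # True
```

Property-based testing libraries (e.g. Hypothesis) generalise this pattern by generating the input grid for you instead of hand-picking values.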
@@ЛукашевичАнатолий Right, it sounds like this depends more on generalized mathematical formulas or certain properties or axioms, which can be difficult to put into code, let alone understand.
This is why I don't understand Allen Holub's distaste for measurement. If you can measure something, you can try different ways of improving the thing you're measuring and compare how effective they are. Even when there is no "measurement" there are still measurements of a different kind. For example, if you don't use the DORA metrics to measure agility then you're probably measuring compliance to some authority's rules for a development methodology. Which is a great way to end up with cargo-cult Agile!
That was some very unexpected twist when you first brought up Bill Gates as an example for "Famous Person but Bad Idea" and then Steve Jobs as "Disliked Person with Good Ideas". Made me actually laugh. R.I.P Steve, though.
I admire a lot of what Steve Jobs did, but I separate it from who he was, the people close to him say he wasn't a very nice person. He was often actively cruel to people, including his own daughter. His Biography is interesting reading.
So... where exactly did agile go wrong? Where is the link to agile? I must have missed it - or is the title just click bait? I love the juxtaposition of your arguments with scientific principle, but where does agile come in? For me agile does not contradict CD or TDD at all - they can go hand in hand. #confused
The point I was trying to make, and probably didn't make strongly enough, was that the "self-organising" principle in agile thinking is often taken, wrongly, to mean that every opinion is equally valid, some opinions aren't! So we need better tools to decide which opinions are worth considering and which are not.
Having had to use some of their software, I wouldn't say Google's or Microsoft's software is "good"... in fact, on some occasions I would describe it as "bloody awful"... I think the term you are looking for is "profitable", which is not at all the same as "good".
I don't think a video like this really helps in convincing the people who call you dogmatic, because I feel like you're not even aware of what their complaints are. If someone calls you dogmatic, and you pull out a video where you torture the scientific method to arrive at the exact same conclusion you were already convinced was true, you look *more* dogmatic, not less. Since you want to see the holes in your argument: 1. Some ideas might be dumb, but that doesn't mean they don't have any area of applicability: The earth might not be flat, but on a small enough scale, that's a perfectly valid assumption. You might dislike waterfall development, but in certain environments it's pretty much *the only available option*, so trying Agile in those environments is just a distraction that wastes everyone's time. 2. Cherry-picked examples: You support your claims with how teams in successful companies have seen good results, without mentioning that other teams *inside* those same organizations have done *other* things and got good results as well. It's almost as if being a big company that can afford to pay extra for talent might have an impact on performance! Showing teams in successful companies finding good results with these methods is just not enough. You'd need to show, for example, that they don't get good results with other methods, or at least that they get better results, and then *maybe* you have something. Then you just need to make sure that you can replicate that across companies, across teams, across projects... Not doing that is just cherry-picking. 3. Discrediting preferences: The problem with the argument being made is that on this topic *preferences are actually important*. I can *measurably* see in my work that I'm more productive when I have scented candles in my office. I am more comfortable with them and can stay for longer at my desk actually working instead of being distracted.
That is *my* preference, and that same candle might be an annoyance to other people, so it's perfectly reasonable to infer that preferences (even silly ones) have an actual impact on performance. As in point 1, you're generalizing your conclusions without real evidence that they are general principles. 4. Aim to prove/try it: If you are being skeptical about your ideas, reports from other people saying they did not find the same results should be interesting and counted as evidence, not dismissed with "you're doing it wrong". Doing that makes your argument an unscientific one, since you're making it unfalsifiable.
Great comment, I also finished the video feeling it was a lot more dogmatic, not less. I'm still convinced a good Agile team will be more productive than a good TDD team on most kinds of projects, and I don't even like Agile that much.
I don't agree with having to commit (and by that I mean pushing upstream) at least once a day, or N times per day. It sounds somewhat ridiculous unless the code is ready to do something. For instance, I could commit "bool foo() { return true; }". Is that ready for production? Well, if nothing else is using this function it is ready for production, but it makes no sense IMO. Making these kinds of statements in such a generalist way can be dangerous. You should only push your changes once you are done with them (making atomic commits is something else). I pretty much agree with everything else, though.
Feature branches vs. continuous integration seems like a false dichotomy to me. I don't see why a team couldn't streamline small, short features while continuously integrating. Oh, okay. It has to be integrated everyday. You probably should have started with that to reduce confusion.
@@ContinuousDelivery That's how all the teams I have worked in work ;o) A developer works on a feature, typically splits it into multiple smaller pull requests (not all at once of course, one after another) they get reviewed, they get merged, the work moves on. This is very typical workflow for many teams (including Google, although they do not use feature branches). I think whenever you mention feature branches you're talking about something that some teams USED to do, and probably some teams do to this date, but that is no longer the norm in software development, where there's a big feature that needs months to be worked on and nobody wants to put it into the final product before it is finished, so that feature remains on a feature branch until it is finished after few months and then it gets integrated into the main branch. I'm sure that there are some teams that do this very often, I'm sure that there are many teams that do this very occasionally, but most of the teams I worked with would not go this way and instead choose things like feature flags or outcommenting some part of the code or similar. Obviously those months long living feature branches are a bad idea and I think even those companies that do it that way feel like it's a bad idea and have some other reasons why they have decided to accept the downsides of that idea due to the benefits that it gives them (such as legal reasons etc.)
16:10 But why is integrating once a day CI while, say, once every 2 days is "by definition" not CI? Can't our team simply (re)define CI as requiring integration once every 2 days? Maybe it makes more sense this way for our project. If yes, then how far can you stretch this until it objectively stops being CI?
I sense that this approach to defending a methodology based on evidence is not sufficient to make it happen at a broad scale. The main issue with TDD in particular is that it requires people to practice TDD in order to realise its value, since at first it is very unintuitive why it works. The main obstacle we all face in various industries is the group inertia instilled by the older, more senior engineers and/or "the way it was done", and resisting that is very hard. Like someone said in another comment, it makes you feel like "a black sheep, non-team-player of the crowd".
Another problem with TDD is that many people unfortunately apply it incorrectly, which leads to writing many brittle test cases that break when refactoring. People then get the feeling that TDD only slows them down and eventually stop caring about it. More needs to be done to educate people on how to write correct tests; this is still not properly understood even as people start to jump on the TDD train.
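To make the brittle-test problem above concrete, here is a small hypothetical illustration (all names invented): the first test pins an implementation detail and breaks under refactoring, while the second asserts only on observable behaviour and survives it.

```python
# Hypothetical illustration: a brittle test pins an implementation detail,
# while a behaviour-focused test survives refactoring.

class Cart:
    def __init__(self):
        self._items = []          # private implementation detail

    def add(self, price):
        self._items.append(price)

    def total(self):
        return sum(self._items)

def brittle_test():
    # Breaks if we refactor _items into, say, a running total,
    # even though the observable behaviour is unchanged.
    cart = Cart()
    cart.add(3)
    assert cart._items == [3]

def robust_test():
    # Asserts only on observable behaviour, so it survives refactoring.
    cart = Cart()
    cart.add(3)
    cart.add(4)
    assert cart.total() == 7
```

A codebase full of tests like the first one is what makes people conclude "TDD slows us down"; tests like the second one are what enable refactoring instead of punishing it.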
Potentially true haha! BUT whilst the waterfallers are still gathering requirements and haemorrhaging money... the agile company and their 💩💩 is earning money, OR they've quickly realised the path they are going down is wrong, so they stop and change direction.
A lot of Agile practitioners treat Agile like a religion: when the development is a success, it is because of Agile. When it fails, it is because the team didn't do enough Agile.
Is it a common expectation that the title of a video is not related to the content? Or did I miss a segment of the video - how is this video related to a fault in Agile? I understand contra-agile titles get clicks, and so I can respect that creators making good content have to play that game - but I'm not yet sure if we viewers should accept that emergent result from YouTube. But here I am commenting - a behavior I worry will reinforce the use of clickbait titles. Do you have insights to share?
A non-dogmatic approach would be to say that you don't have any metrics showing that TDD or pair programming works better than anything else. They work well for your team because you hired developers who want to work that way. If TDD and pair programming worked better than the alternatives, then after two decades there would be strong evidence for that. But there really aren't any metrics showing TDD and pair programming are superior.

Dave is right when he says that arguments from authority aren't sound. So why does he use Carl Sagan as an authority? At 8:33 he says "...the necessity of having a model of why A works better than B that allows you to compare it to the alternatives." Models need to be tested, though. When did you test the different practices of software development against each other, controlling for all the variables? It's ironic that an advocate for test-driven development ignores the need to test the practice he advocates against alternatives.

Dave cites SpaceX as an example of the success of TDD. But the flight control software for the space shuttle was written using waterfall. Why did he leave that out? These two examples show that very different practices can succeed in producing large, complex software systems. Neither is a reason why anyone should adopt those processes, however. This is a kind of argument from authority: 'SpaceX uses TDD so should you'. www.fastcompany.com/28121/they-write-right-stuff
@@ContinuousDelivery - Ummm, that's what I wrote. SpaceX's software was written with TDD and the space shuttle's was written with waterfall. Unless you're objecting that I left out Trunk Based Development. TDD and Trunk Based Development are two different things. You still need to come to grips with the fact that waterfall has produced a lot of very good software.
Also, no: there is data in favour of both TDD and Pair Programming, more than there is against them, but the data is not good enough, which is why I don't rely on it. Equally, you can point to no data saying that NOT using TDD is better! So we are at an impasse, and Sagan's Baloney Detector is what we need.
@@ContinuousDelivery - And that lack of good data is the real problem. Given the importance of software development to the global economy we really need good data. Not data on student projects, but data on real projects done by experienced developers. What is needed are controlled experiments on real-world projects using professional developers. That would give us reliable data. No one is even talking about doing realistic experiments on this scale, however. But we need them.
@@deanschulze3129 There were small-scale controlled experiments, and they showed an advantage for TDD. So where is your data to suggest that your approach works better? I wish there was better data, but there is no good data for either approach. What we do have great data in favour of is Continuous Integration, which tends to go hand in hand with TDD; CI builds better software faster than any alternative.
Your comments re branching are so odd to me. If you group changes to a system by the impact they will have, you end up with a range from trivial to redesign. "No-branch" development will work at the trivial end of the scale but NOT at the redesign end. "No-branch" assumes a design in which the code is partitioned so that changes are isolated. At some point you will hit changes that are problematic because of the design. In that situation, why would you throw away the option to branch? That makes no sense at all; unless it's a completely new project, isn't that surely worse than branching?
Well, in my experience, anyone who pushes some sort of idea along the lines of "don't do x, do y" is dogmatic. I mean, this stance against branching is very silly. Torvalds made branching cheap for a very good reason; do we honestly believe we know better than him? I've also heard other people say: don't use cherry-pick in your strategy. Say what? Again, Torvalds implemented cherry-pick for a reason. Same with force push. Learn your tools well and use ALL the features if necessary!
"Don't hit yourself in the face, learn new things instead" - dogmatic? I guess that you didn't watch the video to the end? One of the items in Carl Sagan's Baloney Detector - the core of the episode - is: don't take "argument from authority" seriously. Torvalds made branching easier because his problem was managing changes to one of the biggest and most important open source projects; that is NOT the same problem as working in a team on some software.
Where agile gets it wrong, especially Scrum, is that there is no peer-reviewed empirical evidence to support the claims of agile proponents. Kanban and pair programming are two minor exceptions.
If I'm only allowed 1 day to test my changes and verify them before I have to merge them back to "master", I would go mental within a week. I can't conceive of any change of any substance that would take one day to define, code, test and verify. What kind of software does Dave think people are writing? CRUD web apps? I write embedded system code in an environment with cross-compilation, custom ARM hardware, and testing that NEEDS human intervention because the system is moving motors and using sensors. No fully automated testing is possible. This is where Dave "triggers" people: it seems to me that Dave assumes everyone is writing simple, low-complexity, technical-debt-free, beautiful code bases. In reality most of us are sitting on a mountain of crap, trying to keep it from falling apart for one more day. And only 1 day to merge creates a people problem: if you merge by 16:00 and the day ends at 17:00 and your thing breaks something, you now have a fresh shit-storm to deal with.
I don't get it. In the middle you say that if TDD doesn't work for some teams, it's because of lack of skill. And in the end you say that lack of skill isn't a good argument because it's not falsifiable. So why do you use it then?
In your system, doesn't it require almost all of the people on the team to be top developers who deeply understand the ideas you promote? Something you have at Google and Facebook perhaps but which is rarely a given in a regular company.
Maybe give us some examples where someone who commented on the channel changed your mind on some fundamental topic? I must say, one area where you helped to change my mind is on shorter-lived branches. I am now a fan of feature flags for new features that are incomplete, where possible. I only got round to your way of thinking after some rebase headaches. But "no branches"? Let us say you are upgrading libraries, see a major set of issues, and then need to store your work before going on leave. Where does this work get stored? Are you just going to merge the broken code? I feel branches are practical, and need to be managed.
I'd always critically question if you really can't do such upgrades incrementally to prevent large pockets of broken code. However, I agree that there are some cases where one can't do them incrementally because they present a major set of issues that span more than one day of work.

But honestly, how common are these? I've experienced these cases when upgrading application frameworks, e.g. from .NET Framework to .NET Core or going from Xamarin to MAUI. This kind of upgrade happens once in many years, however. Whereas how often are you introducing new features that can be directly committed to main just fine? I don't know your work context, but would it be weird to say that can be 90% of the time?

So in such a situation I'd propose to be pragmatic and find a balance. Use CI (no branching) as the default way of working. And for those major refactor/upgrade undertakings, make a rare exception that is well known to the team and create a separate short-lived branch. As you can read, I agree with you that branches can be practical; they have their utility. It is the rigid black-and-white thinking of many people (doing either 100% CI/TBD or 100% feature branching) that blinds people to knowing when to choose the right tool for the job. Understanding trade-offs and being flexible can be difficult for many.
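The feature-flag alternative mentioned in this thread can be sketched minimally like this. Everything here is hypothetical (the flag name, the checkout functions, and the env-var convention are invented for illustration); real systems usually consult a config service rather than the environment.

```python
import os

# Minimal feature-flag sketch: incomplete work lives on trunk but stays dark
# until the flag is switched on. All names here are hypothetical.

def is_enabled(flag):
    # Illustrative assumption: flags are read from environment variables.
    return os.environ.get(f"FLAG_{flag.upper()}") == "1"

def legacy_checkout(cart):
    return sum(cart)

def new_checkout(cart):
    # Still under construction; safe to merge to trunk because the
    # flag defaults to off, so no user ever reaches this path.
    return sum(cart)

def checkout(cart):
    if is_enabled("new_checkout"):
        return new_checkout(cart)
    return legacy_checkout(cart)
```

This is what lets a months-long feature integrate daily: the incomplete code is on main, compiled and tested with everything else, but behaviourally invisible until the flag flips.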
I think people genuinely misinterpret what he's up to. And the distinction between dogmatism and being opinionated is helpful. However, the title of the video does not match the content and continues to give Agile a bad name. In between, it's more about "branching" vs. "working on the develop branch continuously", where he again proves the point that being more agile (short cycles) is the better way in general. To me it looks like he is not arguing against Agile at all. Try arguing for a stiff hip and I will let go of the idea of Agile. ;)
I didn't mean to mislead, but I think that I didn't make the point about Agile as strongly as I meant to. I was referring to the equivocation that all viewpoints/ideas are equally valid, and I don't believe that they are. So, what we need in that case is tools to help us to decide between the bad ideas and the better ideas.
This channel has become a series of bait videos. It seems like you just want to make people mad so that they engage: commenting, YouTubers reacting, and all that. Very sad to see this.
This is a kind of clickbait. Self-organization is also about decision making. If the team allows everyone a veto on every idea, then the team has an impediment to solve. Nobody would argue that cars are bad because some are driven on the wrong side of the street.
Mike, I believe I see a fallacy in your reasoning. Take this statement: "no human can run 100 meters in under 12 seconds" is clearly wrong. What about this: "no human can run 100 meters in under 9.6 seconds"? FYI, that's also wrong - but only ONE person can do it. Your statements make perfect sense in a world (and teams) full of world-class devs, in a company that has already completed the modern-age mindset shift (ALL companies are IT companies), and where time and money are not scarce. Good luck telling your message to developers who write C in Notepad and don't use version control.
I'd love for Dave to do a video where he is pair programming with someone and leading them through how to "don't branch" for the first time. All the theory is awesome, but for some people it doesn't click until they see a practical example.
Great suggestion! Getting from theory to practice is usually the bottleneck.
It's not even that he's not branching, I think; it's that his branch is short-lived, so the maintenance burden should fall to zero.
That reminds me of some of his earlier videos from years ago where he took somebody's code (Java, I think) and refactored it to bring it under unit tests. Although it was a retrospective - we don't ever see what the person who wrote the code was thinking - it's pretty similar.
One of my favorite video series.
What is the pair programming video?
The real challenge with these ideas is getting any team to agree to try them out. For some reason all the data and logic in the world often just isn't enough to pull people out of their comfort zone. Being wrong and doing things badly is easier than learning and changing.
Have you identified what your decision makers care about, then gathered evidence that what you want to try helps with those things? Ideally with evidence from a source that the decision maker trusts. If you want to sell something, the buyer needs to perceive that it will solve their problems.
@@jimmyhirr5773 That's a good point and makes a lot of sense to consider that, but I was mostly speaking from a lead engineer role trying to convince the other engineers. I've had little luck even getting my peers to want to try out any of these strategies, let alone getting to the next step of convincing management and leadership.
I’ve recently been coming to the conclusion that if you’re the lonely voice advocating for a lot of this stuff, at a certain point you just have to start doing stuff against any “rules” that might be in place. In my job, most managers around me are expressly (or tacitly) against pairing or automating manual testing processes. The thing I’m getting more comfortable with is you can just start doing those things (maybe you need at least one other person who is a fellow traveler) and not tell them. When you’re inevitably outperforming others and enjoying your job more, maybe (just maybe!) you can tell them what you’re doing.
In other words: maybe stop asking for agency and just take it.
@@videotime706 Agreed. As long as there aren't any detrimental side effects and you're doing it for the good of the organization I'd follow the old advice of: "It's easier to ask forgiveness than it is to get permission"
@@videotime706How are you able to do pairing if your management is so against it? Do they not assign tasks to individual people? I would be especially curious on how you do this if you have a Daily Scrumtus Report Meeting where you must tell your immediate superior what you worked on the previous day.
Next episode for you - and I do mean this seriously: How saying "not all opinions are good" gets you branded as "not a team player" and the insecurities of leaders in our industry nowadays.
The insecurity is born from tech companies being driven by people who don't want to care at all about how software management is done. Instead they just copy paste from other companies.
I've seen this attitude a lot in the traditional machine-building industry. It is not allowed to discuss options; any arguments will be ignored, and the one highest up the organisation chart picks the opinion that the rest will struggle with for the next couple of months.
I suspect that a lot of vendor lock-in is influenced by vendor kickbacks. Seriously, there were free open source options, but nobody was looking to give up the free lunches at the Brazilian steakhouse. Good times.
I can buy that. I'd want more than simply that statement, though. Pointing out that something is bad doesn't change that it is a known quantity. Trying to sell the art of the possible is much harder, especially if all you do is point out that the status quo is bad, because it is known, and folks are comfortable with the known. That said, it is not your fault if you cannot convince folks not to drink poison.
Maybe it's just me, but the title doesn't seem to match up with the content of the video.
The content is about how to identify bad ideas whereas the title implies that the content is about problems with Agile.
The description also does not match the content of the video.
Did you see 2:55 ?
@@jimmyhirr5773 I'm going to assume that you're joking. Since 5 seconds in a 20 minute video would clearly not justify the title.
Wow, I was just about to comment the same thing. Also, I don't see how 2:55 has anything to do with where Agile gets it wrong.
In my opinion, there are quite a few more things about Agile that one could be sceptical about, but bad ideas have nothing to do with the agile methodology. I actually read a book by one of the writers of that manifesto, and what I thought most often was "What the ..." I don't think we need a new mechanism for sorting out bad ideas - things like brainstorming, where all ideas are for the moment treated as equal, are only one step; as soon as those results are processed, the quality of the ideas will become more and more clear, no?
@@friedrichhofmann8111 There's no need for every team to start from 0, and not all ideas are equal. Some are supported by evidence. Everything in the video is related to fixing the nonsense I hear from the "let's just all get along" agile trainers.
In recent years I see the principle of "standing on the shoulders of giants" more and more replaced by "it's old, it's wrong, I know better", in an immature way. Not only in the area of software engineering, but more as a Zeitgeist thing.
could you elaborate?
I'd speculate that it's partly due to the internet becoming mainstream and providing everyone with easy access to so many different perspectives on the same subject. Individual authority (the "giants") has been reduced significantly, as now everyone can easily produce and distribute knowledge. And people can use that to offer different (immature) perspectives, creating pockets online of "The Right Way™", sometimes just for personal gain. And there exists no central authority reviewing the validity of all these perspectives.
@@MrSanchezHD What is the requirement for a central authority to review the validity of all these perspectives?
@@BernardMcCarty Central is perhaps the wrong word. It can be a decentralized authority, as long as it is well established and recognized.
The use case for it would be to split perspectives based on objective, rationalized arguments (science) from those based purely on subjectivity. Splitting the two would reduce the noise introduced by weak arguments and keep our collective focus on ideas that are supported by strong arguments, building upon those exclusively. Those ideas are what I regard as "the shoulders of giants".
@@KiddyCut
"Standing on the shoulders of giants" means building on the knowledge and achievements of the past to make the next step. While "giants" might refer to outstanding individuals, for me this can also be the sum of all the parts done by individuals.
This is one of the central principles of science. If you want to have more than just an opinion on a topic, you need to do your homework and study some of the work that others have already done on that topic.
E.g. in software engineering I see a current misunderstanding of the old waterfall-like approach vs. agile methodologies like Scrum. Some just ignore all the knowledge built up before the time of the Agile Manifesto and dismiss it as "it's old, it's wrong, I know better". They then completely miss the point of what, e.g., Scrum is about: small increments to get fast feedback from your customer on whether you meet their possibly changed requirements, which was the main issue with 'non-agile' (i.e. waterfall-like) methodologies in the 90s. Instead they also completely misunderstand things like the Agile Manifesto and do no upfront thinking anymore ("I just start to code and it will turn out fine eventually. Iteration!"), documentation is some autogenerated API description with nearly no context ("code is documentation"), and they complement that with "I do not write unit tests, we do end-to-end tests!"
And with all of this, software quality gets lower and things take even longer than before, in situations where it is clear what needs to be done next but it has to be broken down to fit into sprints, with iterations without any value, as there is no customer or other important stakeholder to check whether you built the right thing.
And don't get me wrong: I like Scrum as I like other methodologies. It is a great tool in my toolbox. But it is a rich toolbox, filled with great stuff from decades of software engineering, even from times before my birth. With a lot of people nowadays I have the feeling that their toolbox is very empty besides the Scrum hammer and their end-to-end-test screwdriver, and that they are doomed to reinvent the wheel over and over again and make the same mistakes as so many others did before them.
The key step for achieving Continuous Delivery is the automation of the deployment pipeline beyond running unit tests. When the pipeline also automatically runs integration tests and a set of acceptance tests, and then deploys the software if all is well, that is when you start getting the benefits of this style of working.
Most teams are scared to automate the final parts of the pipeline and have some sort of manual QA gate holding things up. Those teams will never see the benefits of Continuous Delivery. And most will have an over-worked architect or QA team struggling to keep up with the changes.
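The staged pipeline described above could be sketched roughly like this. The stage names and commands are illustrative assumptions, not any particular project's setup; the point is that each stage gates the next and deployment is just the final stage.

```python
# Rough sketch of a deployment pipeline that gates deployment on unit,
# integration and acceptance tests. Commands are illustrative assumptions.
import subprocess

STAGES = [
    ("unit tests",        ["pytest", "tests/unit"]),
    ("integration tests", ["pytest", "tests/integration"]),
    ("acceptance tests",  ["pytest", "tests/acceptance"]),
    ("deploy",            ["./deploy.sh", "production"]),
]

def run_pipeline(stages, run=subprocess.run):
    """Run each stage in order; return the name of the first failing stage,
    or None if every stage passed and the deploy went out."""
    for name, cmd in stages:
        if run(cmd).returncode != 0:
            return name  # stop the line: nothing past a failed stage runs
    return None
```

Injecting `run` is just a seam so the pipeline logic itself is testable; a real setup would express the same stages in the CI server's own configuration.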
Precisely this. In these scenarios, QA are constantly chasing the recent changes for features and fixes to known bugs to make sure they do what they say they do, instead of being freed up to do the important work of finding the bugs that haven't been reported yet.
This comment (and this whole channel) exhibits the all-too-common tunnel vision that all software is equal. I'm not disagreeing that automatic deployment may be good. But very often it is not good or safe or legal. Think about ABS brakes, fly-by-wire, nuclear, medical devices, contractual constraints... in fact, come to think of it, any software if you think of it from the user's point of view. Me and the rest of the population hate three things most: change, change and change. I don't want to use your (or mine!) crappy software; I want to get things done, and unfortunately the software is the way to get things done. When it changes under me while I need to pay bills or am on a deadline, it just makes me crazy mad. From the user's point of view, the devil you know (today's crappy software) is often better than the new and improved crap.
@@Axel_Andersen There are few people who understand the different types of software better than this channel, and myself, having worked for 30 years developing mainly firmware, but also other types of software.
@@TheEvertw From the few videos I've watched this is not evident.
Automation is not the "key" step. It's the Achilles heel. Everything he states is about bringing a complex system together. That doesn't mean that once you bring it together it needs to go to production. The "one-click deploy" is the software version of the microwave. We are paid a lot; I don't understand why we are not treated as well as chefs. Get the managers out of the kitchen and you will have a better restaurant. Quality and speed have a healthy relationship called efficiency. Automate up to the last step, but always have a person in the middle.
My current interpretation of why developers argue against things like TDD, pair programming or CI is that those arguments are just justifications. They aren't really arguing in order to work better. They are finding excuses so they can stick with processes and tools that allow them to work in isolation, without having to engage in intense collaboration with others within their teams. Many developers optimize their process for minimal interaction first, and for efficient software development a far second.
It is rare to find developers who are willing to sacrifice their own comfort and mental energy, so they can achieve more efficient team-focused and long-term results.
You have hit on one of the great ironies of our industry. The stereotypical developer is a loner, a geek who spent their teens in darkened rooms writing code; they are probably an introvert and socially awkward, if not somewhere on the autism spectrum; in short, they are far more comfortable working on their own. The kicker, of course, is that successful, non-trivial software development requires teamwork and interacting with others.
One of the best software development managers I've worked with believes that "Agile daily standups, requiring her team of socially awkward, introverted, young people to stand in a circle and talk to others about what they're doing is perhaps the cruelest thing you can do to them as people." We need to find other ways of working that the team finds more comfortable.
For some problems I have even found it effective to have 5 or so people working together on the same piece of code at the same time. It allows me to get the various engineers and scientists together while writing particularly difficult pieces of code, so we can make sure the code does what it needs to do with everyone involved right there to ask questions. Trying to make all the requirements specific enough to code them up completely without needing those people present turned out to take more time than just having them coding together.
Pair programming as in collaborating with others, through PRs or when challenges are faced - that's fine. The idea of having someone else to program with before you write a single line of code is just pure BS.
@@awmy3109 Even on this channel it was not promoted this way. It was done to bring new people into an already existing codebase and have them work together to solve the new person's task.
It is more because there is pressure to do things quickly.
Having read Dave's books and watched his videos, I can say from first-hand experience that implementing these ideas has delivered significant improvements for the projects I'm involved with. It takes time for members to digest and implement, but when you do, you achieve process improvement.
I agree that for 80% of commits feature branches should be avoided. But I think that when making complex changes, a feature branch is sometimes necessary. I say this with caveats: a) merge/rebase from master into your feature branch regularly; b) if, while working on your feature branch, you have a small isolated fix/refactor, get that committed to master and then merged/rebased into the feature branch.
Agreed. Knowing the utility of branches and being flexible in your way of working is key.
Good point. It would be a judgement call when to create a feature branch. Like a surgeon deciding which procedure to use in a particular case. Unfortunately, too many managers and architects aren't willing to let their developers make that call.
@@deanschulze3129 Unfortunately ? That sounds like you like micromanagement.
@@ForgottenKnight1 - I said it was unfortunate that managers are not willing to let their developers make that call. It would be better if managers would let their devs make that call.
If I am anything to go by, one reason people might shy away from CI/CD is the fear of failing quite publicly. A lot of us don't know it, but we were taught to be ashamed when we were wrong about something. While I wholeheartedly agree that failure is innovation's best friend, and will absolutely urge others around me to shake off the fear of failure, I find it quite hard to shake off myself.
Companies usually fail badly at this. Leaders from the highest levels should lead the way, discuss some of their failures publicly, and show that it is acceptable to fail. Then all you need is to keep it up and remind people occasionally, and the rest happens by itself over time.
'Someone you despise...' And then shows a photo of the crappy Steve from Apple! 😂😂😂 Classic! 👌
The problem with TDD is that in order to do it, you need software designed and built from the start to support using or isolating any portion of the code without starting the whole thing. If you're dealing with legacy code that doesn't allow injecting alternate dependencies, or software that relies on multiple threads, it becomes much harder to do.
Naturally you can try to refactor existing code to enable TDD, but that means changing existing code, which would require the old process of manual testing, not to mention approval from management, who may not see things the same way you do. It's like a cycle you're stuck in.
Yes, retrofitting TDD is complex and disruptive. My advice is generally to defend existing code with acceptance tests, and to do all new code with TDD, refactoring to enable it, tactically, as needed.
Try using the strangler fig pattern to gradually extract away the existing code when it needs modifying.
@@ContinuousDelivery Can confirm, this works in practice! Have done this on an existing gigantic monolith project where, in the beginning, unit tests were dismissed as "not a valuable use of time"
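The strangler fig idea mentioned above can be sketched in a few lines (all names here are hypothetical, invented for illustration): a facade routes each operation to the new, test-driven implementation once it exists, and falls back to the legacy code for everything not yet migrated.

```python
# Strangler fig sketch: route migrated operations to new code and everything
# else to the legacy implementation. All names are hypothetical.

def legacy_price(order):
    # Old, untested code we don't dare modify directly.
    return order["qty"] * order["unit_price"]

def new_price(order):
    # The rewritten, test-driven version, now discount-aware.
    return order["qty"] * order["unit_price"] * (1 - order.get("discount", 0))

# Grows one entry at a time as the legacy code is "strangled".
MIGRATED = {"price": new_price}

def price(order):
    handler = MIGRATED.get("price", legacy_price)
    return handler(order)
```

Callers only ever see `price()`, so the legacy function can be deleted once the last entry lands in `MIGRATED`, without any caller changing.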
I work with a lot of science and engineering code where we are building simulations for making medicine. I have been able to get branches down to about a week on average, but I found that trying to shorten them beyond that caused more problems. I think part of the issue is that many of the people doing development are not primarily developers. They are engineers and scientists who need to get a model built, and they need experienced support and someone who can look over their code before it gets merged in. The other part is that when building a model it is often hard to determine what "correct" is. The branches tend to attract more group discussion on whether the approach being used to solve the problem is the right one.
I love this. My biggest fight is agile teams equivocating way too much, especially with things like demanding that user stories have hours estimates, or pulling stories into a sprint before the existing stories have sufficient testing. I continually have stakeholders pushing these ideas, and when I push back they throw the 'dogmatic' label at me.
Definitely agree with you on CI. I would love to remove feature branches! That said, I sit comfortably with feature branches that live for a short time (as you have also promoted). Also, an alternative is trial integration, whereby you get the CI server to pull feature branches into ephemeral integration branches and see what happens. It's CI on the fly! Sometimes it is easier to get existing teams to do this as a stepping stone to full CI.
Agree with you on the whole on TDD in terms of fast feedback. I think there are times, though, where TDD ends up exploding your regression set, whereas BDD achieves the same quick feedback with better coverage, using tests that are executed higher up in the system. It can be less costly to develop and maintain. Similarly, when working on safety-critical systems, using equivalence-class partitioning and then constrained random testing will generally give you much better coverage than TDD.
You need to get a T-Shirt Dave with a robotic dog blueprint on it, “Dog-matic 2000 (tm)”
Or some kind of cross between a dog and a washing machine.
That's a very good idea.
The main reason why I like feature branches is that we can release features in the order they are ready, not the order we started working on them.
Love the talk about the Baloney Detector. I watched the entire video; I found your connection to Agile weak, and I have experienced the opposite effect, as the constant feedback that Agile principles facilitate promotes the strongest ideas. If you are experiencing bad ideas in your teams, I'm not sure that has anything to do with Agile. That would happen equally in Waterfall, and Carl Sagan's detector will help there too.
Martin Fowler recently updated his article on continuous integration and stated that, based on his knowledge, it's not suitable for projects with no fixed team assigned to them, e.g. open source projects. How do you feel about it?
Yes, I'd agree with that, though I usually say it the other way around - "Feature Branching and PRs were invented for Open Source projects, because you don't know, and so can't trust, the people doing the work. If your team works like that it has serious problems".
@@ContinuousDelivery - You are very fortunate if you can trust your team members so much that you don't review their code. That kind of trust is rare.
The vast majority of teams use the trust but review approach.
@@deanschulze3129 You seem to want an argument, but I am not very interested in you straw-men. No I didn't say that there was no review, there are better ways to achieve a review than Pull Requests. You seem to assume that it is not possible for the approach that I recommend to work, yet it patently does work for lots of people, and what data that we have says that it works better than alternatives. I don't claim that any other way can't work, I only say that the approach that I recommend is widely used in successful teams and that the data says that this is the route to better software faster - based on the DORA metrics and data collection.
@@ContinuousDelivery which data do we have to say that pair programming is better in all team setups than a code review process? Pair programming is hardly WIDELY used. I haven't worked in a single company that utilized pair programming as the main methodology for development. Every single team I have worked on so far (and that's easily more than 10 teams at this point) has used the pull request review approach. Yes, this is anecdotal evidence, but I would expect that at least one of the teams I have worked with would have used pair programming if it is as widely used as you seem to suggest.
Pair programming will not work in environments where junior developers outnumber senior developers. For example, one senior developer and three junior developers is a setup I have worked in a few times already. The pair programming approach without pull request reviews would mean that every single time there would be a pair of two junior developers, and their work would not be reviewed by the senior developer before it is pushed to trunk. How can that be a better approach than having pull request reviews, where the senior developer can see all the changes before they get merged into the trunk and provide feedback?
Similar kinds of team structures are very common. Having pair programming in these setups simply doesn't work. I know, because I'm reviewing the code of said junior developers, and if those changes were merged into trunk without my review, the quality of the codebase would degrade extremely quickly.
It's definitely useful to have pair programming sessions with junior devs from time to time to teach them something etc. But doing it all the time is untenable in that kind of team setup.
I recently left a team because of the frustration of not being able to merge into `master`. Twice I had to spend two weeks merging, catching up with other people, remerging, waiting for approval from the Architect, etc., hoping to find a moment when nobody else was checking stuff in. At one point I got angry and demanded that my changes be accepted without review and that nobody merge anything into master until my changes were in. That was a 12-man team heavily using feature branches. I stayed until I felt I had done enough to fulfill my contract, then quit.
I am all for committing directly into main (after some sanity checks & local unit tests). These complex acceptance procedures give a false sense of security, but in effect only raise frustration and stress levels.
In such an environment merges are done by backlog priority. Is my item more important for the business than yours? Then, even if I finish after you, if your code is not merged, I get to merge first because of that priority. In short, you apply a merging strategy based on the priority of the business.
@@ForgottenKnight1 Nah. I had the privilege of re-designing a bit on the core of the system that had major impact on the rest of the system. In order to fix design errors that kept the system from fulfilling its scalability requirements.
So basically every other checkin by anyone else required me to update my branch -- which required new approval by the QA people, etc.
Business-wise, my update was THE most important that was being done: the product was not viable without it.
@@TheEvertw What a nightmare! Congratulations on getting out of there in one piece! That team sounds like they're real good at engineering, engineering their own problems that is.
@@retagainez There was insufficient trust from Management, it wasn't a happy work environment. I was glad to leave, though I made excellent money.
Sounds like a nightmare. In our team (about 35 developers) if some PR has any substantial changes that affects the whole project (therefore causing a merge hell every time the main branch is updated) it gets prioritised, approved and merged very quickly.
Apart from that we have short-lived feature branches that get reviewed and merged in a day or two typically, so not a major problem. Following this channel for a long time but honestly struggling to see how CI would work in our company, taking into consideration that about 80-90% of devs are juniors here (so their code needs vetting every time), and we work in a domain that requires thorough app testing (so a manual QA process is compulsory), especially the accessibility testing, which is impossible to do with any automated tools atm.
+1 for mentioning a book that should be part of the curriculum in every school… and its application to your reasoning.
The main reason I'm not working with continuous integration right now is simple. My company has a culture of Cowboy Coding! :( They can't even conceive of pair programming or continuous integration done the way it should be done. Everything is a mess of red tape over a golden coating of "we do agile".
I'm the lone developer/maintainer of a framework and docker image that uses that framework. And this framework is used by every single customer integration this environment has. In short, I'm forbidden to die or quit the project. If at least the salary was compatible with such responsibility all would be fine.
You might be shocked to hear that a better salary doesn't fix any of the frustration you feel as a result of "red tape." I say this from my own experience, from a period in which my employer offered to double my pay.
I hope that the only thing stopping you from leaving is preservation of your professional reputation with the company, and even that is expendable if you believe you can help others in a better environment. Once the company can, they will treat you as expendable.
Current job market is not very good if you are lacking YOE so most people recommend to not abandon your current job until it improves.
18:15 - A successful merge is just that. It does not validate that the final result satisfies business needs. Nor does trunk development or pair programming. What validates a solution are the tests run against that codebase, no matter what the branching/merging strategies are. Are the tests covering all business cases? Are the tests failing? If yes, you go and have a look because you broke something. Maybe the break is intentional (requirements changed) or not.
Exactly. Which is exactly why tools like Github, Gitlab etc have a feature to be able to run tests that combine feature branch changes with the trunk merged, to confirm, that merging does not break the build or any required feature. And then all the problems with merging potentially breaking the code magically disappear.
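The "test the merged result" feature mentioned above can be sketched as the steps such a pipeline runs before reporting green. The command and branch names here are illustrative placeholders, not any particular vendor's API:

```python
# Hypothetical sketch of a "merged result" pipeline: rather than testing the
# feature branch in isolation, merge current trunk into a scratch branch and
# run the suite against the combination.
def merged_result_steps(trunk, feature):
    return [
        f"git fetch origin {trunk} {feature}",
        f"git checkout -B ci-merge origin/{feature}",
        f"git merge --no-ff origin/{trunk}",  # fails fast on textual conflicts
        "make test",                          # catches semantic conflicts too
    ]

steps = merged_result_steps("main", "feature/login")
```

One caveat, which is arguably Dave's underlying point: this validates against trunk as it was when the pipeline ran. If another branch merges first, the green check is stale and has to be repeated, which is the problem merge queues and, ultimately, trunk-based development are designed to remove.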
Short answer: it depends. Longer answer: it depends what kind of software you are making. Most software is a bag of features with shared data. The work I do is building organic models where the whole is essential. Things like optimisation or simulation models. I have never been able to convince your typical full-stack dev that "you can't test a leg or an arm, only the body". I haven't got the answer yet, but I'm still searching.
Thank you, this was very valuable to watch. Great content, as always.
Excellent video! Getting in trouble for taking a different, simpler and ultimately wildly successful approach is the story of my career. Dogma is the enemy of innovation. One question: where does the concept that ideas are equal arise from agility? It isn't something I came across when I worked for an actual agile company.
8:00 "...someone you despise..." - proceeds to show Steve Jobs picture on the background... XD Ruthless!
In my experience people who think agile is bad are people who don't understand that the fundamental part of agile is to get fast feedback and make course corrections.
I get people in interviews talking about "strict agile" as how many days a week stand up is done and nonsense like that. Agile is not defined by meetings.
I have a couple of things that I think I would add on top of this video which are perhaps only lightly visited in what I think is a very good video about all of this.
The right thing today isn't the right thing tomorrow. There are many teams where I would advocate for a SCRUM based approach, because of my read of the team at that point in time and their understanding of WoW. But I may also join an existing and established SCRUM team and choose to advocate for moving to more of a Kanban/Continuous Flow model. Now, the first team is likely in a very chaotic situation: morale is low, deadlines are unwieldy, quality is poor and it's complete chaos. The second team has good flow, things are orderly and everything is going smoothly. The first team needs help in order to get a handle on things, to have a structure that can do some heavy lifting and give them some room to relax, think and restore some zen. The second team, however, is interesting. Why move to Kanban/Continuous Flow? Was doing SCRUM in the first place wrong? For the second question: no, doing SCRUM was not wrong, it was likely the right decision for the context they found themselves in when they adopted it. For the first question: moving to Kanban/Continuous Flow, for the right team, could bring about much greater velocity and significantly reduced lead times; equally, it might not, in which case it shouldn't change. There is a progression here, and it's entirely possible that the first and second team are the same team, who adopted Kanban without need, which led them to the first scenario (anecdotally, this has been one of my experiences in my career).
Doing the right thing for the wrong reasons is the wrong thing. This builds on the above. A team might see the second team above moving to Kanban and automatically think they should, too. They may even get lucky, and it may well work; they might see some modest improvements. But their improvements will be capped, because they won't have the metrics or depth of understanding available to figure out what a good next step might be. They rely on luck to succeed rather than succeeding through carefully thought out plans where the odds are skewed massively in favour of success. In all likelihood, a team applying this strategy will end up in the first scenario of complete chaos.
So what I would suggest is; rigorously applying a framework without understanding why is a recipe that leads to failure and is dogmatic. Instead, take time to understand your own problems in your own environments, distinguish problem and symptom, and carefully consider whether what you think is the problem might actually be a symptom, something that you can put a plaster over but that is fundamentally not the core problem.
Part of doing this, and part of consuming videos like this, I think, requires an understanding of certain things. Firstly, differentiate anecdotes from evidence. Secondly, stories and examples are only anecdotes, evidence is statistics. Thirdly, and finally, there is no way to shortcut the task of understanding your own environment, you can't outsource this, there is no silver bullet that will always work regardless of the context, you have to pay the price of learning your environment and context.
Do you consult for groups of developers?
Wondering if there are any good studies on applying correct agile models to groups. I was reading a single study on the effect of pair programming over short periods of time, in pairs that were either heterogeneous or homogeneous (in things like matched personalities and pre-existing knowledge). There is no silver bullet to getting people to work well together, but perhaps there is a common pattern for finding groups that mesh well together.
Do you have any anecdotes that suggest something like that? I feel like personality matches and social skills are overlooked. I personally thought (in University) that it would be odd to be co-workers with some of my peers because of a general lack of social skills. I admit that it was immature of me at the time to think this way though.
@@retagainez Technically I'm just a software engineering contractor, but I inevitably end up doing much more than writing code. Code is the simplest bit, ultimately.
So, I can't reference any studies, I'm drawing purely on my own anecdotal experience across numerous teams of engineers and a raft of successes and mistakes as I've learnt to drop the dogma and pick up the circumstantial pragmatism.
With regards to pair programming, well, I think I'll consider mobbing first, I find the most valuable and successful mob sessions to be when everyone is focused on their problem but brings about unique perspectives to throw at the problem and see what sticks and what doesn't. The things I've found that inhibit successful group sessions are where ego gets involved. Gut feels are useful to help give an initial idea of a direction, but they are purely there to give a starting point rather than be the final state of what should be.
I've had incredibly productive pairing sessions with people who are very similar in temperament and personalities to myself, and also where we're very different.
The scenario you describe around poor social skills, I can see how that might be tricky, it's very difficult to know how best to approach that kind of situation, it may be that for some people mobbing is better (where there are a whole bunch of you working on a problem), but for others pairing. It really depends on the specifics. I would say to try it and other things, see what does and doesn't work, and don't be afraid of failure with it either. All I'd suggest with mobbing is that mobbing does require structure, it requires someone who can tie it all together and keep focus and pull the team back on track when they get de-railed, that's a skill in and of itself. Also, trivial stuff I don't think should be mobbed, unless it serves the purpose of training up more junior folks. Mobbing is more expensive, so make it give the most value you can.
Sorry, I don't think I've really offered much more beyond a couple of anecdotes which hardly constitutes evidence. For me it's gradually moving towards evidence by dint of me working with a wider variety of teams than a perm employee would, but it's still just a collection of anecdotes in the grand scheme of things. I have my current working theories about it all, but these will always inevitably change in the face of new data that help shape the body of evidence over my career.
@@azena. Well I appreciate the general observations you've made more than specific scenarios. Certainly a good read, thanks.
It makes sense what you have to say about mob programming. I think it makes perfect sense that mob programming would be great for getting some people who struggle with people to add value in a cooperative setting. I haven't yet experienced any mob programming and the amount of pair programming I have has been limited even if it has been my favorite form of collaboration yet.
On a side note, I envy you. I would definitely enjoy contracting, but I've yet to break into even the entry level market. Not that inexperience would be a barrier to contracting, but perhaps I just want a bit of reputation before I get into that.
@18:18 … ahhh… did you actually test your double-merge example before waving it about as a failure? (Gotta love those post-increments!)
I am in the process of starting a PMO team in my company to mirror the checks and balances system that scrum proposed, on a larger scale. It will be the home base for the Scrum masters, so that we have a Product dept., an Engineering dept. and a Process Hygiene dept. Your explanation of quantifiable measurement helped me greatly. As a former UX professional, it has always been a challenge to give „design" (especially visual design quality) a quantifiable metric that can be measured during development and is not based on the personal opinion of stakeholders. I wonder if it is possible to somehow adopt your „does it fit the rest of the solution" TDD approach for the short term, until I can transform the organization to focus on having a viable product discovery process that validates(!) design options before they are even considered to be built by the engineering professionals.
When you said that we need to free ourselves of the appeal to authority, I believe that should also include deferring to any 'experts'. 7 of 10 dentists saying Agile is best is not a swaying argument.
Bingo.
We need rigorous testing of the various ways of developing software to see which practices work, and which do not. No one has tested pair programming against solo programming. Controlling for all the variables would be challenging, but without such tests saying one practice is better than another is subjective.
It's worth noting that agile consulting was born from a single failed project -- the C3 project at Chrysler in the late 1990s. But that team was the self-anointed best team in the history of software development so who are we to question them.
@@deanschulze3129 "No one has tested pair programming against solo programming" 🤔www.researchgate.net/publication/222408325_The_effectiveness_of_pair_programming_A_meta-analysis
www.sciencedirect.com/science/article/abs/pii/S0950584905001412
link.springer.com/article/10.7603/s40601-013-0030-0
@@BernardMcCarty That test was done using college students, not senior developers so it's pretty much worthless. Also it was a one week project that was part of a course so it was very artificial.
I've not seen a realistic test protocol that controls for all the variables of software development, let alone a realistic test.
@Continuous delivery Hi Dave! We all know you should refactor production code when the tests are passing; it's good advice. What do you think of only refactoring TESTS when they're failing? After all, if a test is passing, and we refactor it, and it still passes, we don't know if we broke it or not - we didn't see it fail. So maybe in order to refactor a test, you should first break production, see the test go red, refactor now, run it again, make sure it still fails, and then bring the production code back, and the test should now pass. Thoughts?
My preference is to refactor the tests when the test is passing, then consciously change the code to make the test fail to confirm that the test is working. If you refactor the test while it is failing it is easier to get lost and end up in a mess.
You could also try out Mutation Testing. Same principle. Have green tests, mutate the production code and then see at least one test fail. Will find many sorts of problems in your production code and/or tests, like missing tests, badly chosen test data, flaws in your production code, etc.
@@birgitkratz904 thank you for a nice suggestion. I always get high scores in MT, and the mutants that live are often neutral (for example "array.length == 0" to "array.length
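The mutation-testing idea discussed above can be shown in miniature: deliberately plant a bug in the production code and check that at least one test fails. This toy version flips `==` to `!=` via the AST; real tools apply many more mutation operators, and the `SOURCE` snippet and `run_tests` suite here are illustrative stand-ins.

```python
import ast

# Toy mutation tester: flip every '==' to '!=' in the code under test and
# verify that the test suite kills the mutant.
SOURCE = """
def is_empty(items):
    return len(items) == 0
"""

def run_tests(namespace):
    # The whole "test suite": returns True only if every check passes.
    is_empty = namespace["is_empty"]
    return is_empty([]) is True and is_empty([1]) is False

class FlipEq(ast.NodeTransformer):
    def visit_Compare(self, node):
        # Replace each '==' operator with '!='.
        node.ops = [ast.NotEq() if isinstance(op, ast.Eq) else op
                    for op in node.ops]
        return node

def mutant_killed(source):
    tree = FlipEq().visit(ast.parse(source))
    ast.fix_missing_locations(tree)
    namespace = {}
    exec(compile(tree, "<mutant>", "exec"), namespace)
    return not run_tests(namespace)  # killed if some test now fails

original = {}
exec(SOURCE, original)
assert run_tests(original)    # the unmutated code passes
assert mutant_killed(SOURCE)  # the planted bug is caught
```

A surviving mutant would mean the suite never exercises that comparison, which is exactly the gap (missing tests, weak test data) that mutation testing exposes.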
I liked the content and I totally agree with your ideas, BUT why mention Agile in the title? This is misleading...
Sorry you felt that, I was thinking about the equivocation that I described in the episode, but on reflection, I think you may be right that I didn’t tie that idea in clearly enough.
Hi Dave, love your channel. Agree you're not dogmatic, but yes, you have opinions - many I agree with. I do disagree with the fundamental statement that waterfall is bad for building software. I'm an agile guy - I like agile and I do agile reasonably well. However, people do agile badly. Similarly, for certain systems - especially in regulatory contexts - waterfall is a good methodology for building software, but again, many people do waterfall extremely badly. There's this idea that waterfall means developments become out of date too quickly. This isn't true. When done well, all it means is you have a (rapid) gated approach to dev. Sometimes agile is appropriate, sometimes waterfall is appropriate. The key is to figure out when one is more appropriate than the other. Generally, in cases where you need faster feedback from the customer, agile works better.
Thanks for the videos, Dave and sorry for leaving the question here. Is it fair to compare software with buildings and say "Buildings have blueprints and that - to an extent - reflect the requirements. The blueprint is specifically used to sanity-check plans for modification (e.g. "can this column carry the load of a new wall?") Software should have some kind of a blueprint, too, that when you need to make changes to it, you can reflect on it."
I can see a variety of methods around but perhaps I can't grasp the importance of one over others due to my lack of experience. Yes, I'm sure many would say tests are the best way to define the expected behaviour of a system but 1) tests can be incomplete as they often are, and 2) tests provide a huge and mostly disjoined corpus of code that doesn't "speak" to humans like plain English (or whatever) does. If anything, they don't have a flow to them like a piece of text; no start, no end, just disjointed paragraphs which hardly depict a shape that you can keep in your head.
What is the recommended way of setting the requirements in stone (let's say in the absence of tests) so that future developers can reflect back on that, for example when it comes to refactoring the code?
I accept that CD done correctly offers superior speed and quality of software. However (attempting to add some nuance here), high quality manure can also be speedily delivered; i.e. delivering the wrong high quality features or implementations at speed is not the goal. It's important that features are critiqued appropriately - not just the code, but the problem they are trying to solve and the approach taken, etc. The cost of not integrating changes regularly is understood. By the same token, we shouldn't underestimate the cost of integrating code that was "wrong", in the sense that it is not solving a valid problem, or the problem it was solving was framed incorrectly, even though the tests pass, all best practices have been followed, and it was deployed within the hour. PRs and auto-deployed PR branches (i.e. with the merged changes) provide a good compromise by providing a space for feedback / critique / debate / consideration, as well as an isolated QA environment, which can be useful to really consider the implications of a feature before it's integrated. I appreciate CD practitioners will solve this problem with feature flags, but at that point the code is already integrated, and depending on your architecture it may be a harder task to rip it out.
So my view is that when done right, both approaches can be fruitful. However, I am on a journey that is moving towards CD, because I believe many changes do deserve quick integration, especially where confidence is high. I also think there should be another solution to ensure features are discussed, and that the activity that would usually happen on a feature branch PR still takes place.
The Farley Shelf Principle: Does my new shelf fit into the space that it’s meant to fit into correctly?
We’re not testing the length; we’re testing the fit!
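That "test the fit, not the length" principle translates directly into how one might phrase an assertion: check the property the user actually cares about, not the incidental measurement. A minimal sketch, with all the numbers invented for illustration:

```python
# Test the fit, not the length: the requirement is "slots into the alcove
# without jamming or leaving a visible gap", not "is exactly 900mm long".
SPACE_MM = 900      # width of the alcove (invented figure)
TOLERANCE_MM = 2    # widest gap the eye will forgive (invented figure)

def fits(shelf_mm):
    gap = SPACE_MM - shelf_mm
    return 0 <= gap <= TOLERANCE_MM   # negative gap means it jams

assert fits(899)        # hairline gap: fits
assert not fits(905)    # too long: jams
assert not fits(880)    # too short: visible gap
```

If the alcove is remeasured or the tolerance changes, only the named constants change; the test still expresses the real requirement rather than a snapshot of one implementation.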
For a mental model of branching, I like to imagine I do all the work myself, one feature per weekday:
Monday: feature 1
Tuesday: feature 2
Wednesday: feature 3
Thursday: feature 4
Friday: feature 5
Some good points Dave, but where does Agile get it wrong? 😄
By being too equivocal, treating "all opinions as equally valid". Some opinions aren't, and we need to find ways to detect those opinions and correct them.
This is not really what agile says explicitly, but it is how many people approach the "self-organising" principle: that everyone can decide for themselves what to do and how to do it. I think that decision making should be team-scoped, not at the level of individuals - collective decision making within a team. I probably didn't say this clearly enough in the video - sorry, it was what I had in mind.
@@ContinuousDelivery Ah I see, excellent point. Yes my initial thoughts from this video were that I need to be more forceful at work that waterfall (advance planning with long feedback cycles) simply *does not work* for effective software development. And be prepared to back that with the oodles of evidence available.
The agile coach at my company is a great champion for things like equality, psychological safety, giving and receiving feedback. All good things. But we miss the one thing that makes all this effective: the ability to respond to change. And that is lost the moment you start planning and collecting feedback on long timescales.
@@ContinuousDelivery One more thought that could be a potential video topic: the delay in gathering customer feedback. My company does an okay job of making a release at least every couple of weeks or so. But most of our feature code is hidden behind feature flags that the customer might take 6+ months to turn on (we have a B2B model and partner with a large multinational enterprise).
So we get very little *customer* feedback - instead, we rely on internal stakeholders who guess at what our customer wants.
This is an example of fake agile IMO - a slightly subtle one because we are releasing somewhat frequently, so on paper it can look pretty good.
What are your thoughts on this kind of situation?
Not sure if anyone's mentioned this, but the problems with feature branching seem similar to the problems with long-running database transactions. That is, big transactions lock access to the database, and can cause other transactions to fail and have to be retried. The example you gave with the two merges incrementing a number is a classic race condition.
Yes! They ARE THE SAME PROBLEM. It is all about concurrent change in information really. If you have copies of information in more than one place, and it is changing, then the information content will diverge, so which one is true? Transactions were one approach at limiting the impact of this problem by providing a mechanism to decide which version of the "truth" to choose, and by dividing up the work into units of change that need to be atomic - all work together or all fail together.
I think that this is exactly the same problem and it is everywhere.
@@ContinuousDelivery how are they the same? That makes no sense. In a DB transaction either everything happens or nothing happens. On the other hand feature branching just gives you merging problems with your code, which is not the same as altering data.
@@comercial2819 both are examples of information changing in two or more places, it is possible that such changes may be mergeable, maybe I changed the account balance, and maybe you changed the account name. We could have merged those changes, but if they were in a transaction the second change would be rejected if we both opened the transactions at the same time. This is just version control at a different resolution of detail. Equally, if you FB we may be able to merge at the end, or we may not. One is PESSIMISTIC LOCKING (the Transaction) the other is OPTIMISTIC LOCKING (the version control System). Early VCSs did PESSIMISTIC LOCKING too. But it is all about information changing in two places concurrently and how we deal with the results, how do we pick the truth now that concept is blurred.
@@ContinuousDelivery ok in that sense it would depend on the lock you are using for your transaction, so indeed you could end up with the same race condition.
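The lost-update parallel in this thread can be shown in a few lines: two "branches" each take a copy of shared state, change it, and write back. Each write looks fine in isolation, yet one change is lost. Plain Python stands in here for both git and a database; it is a sketch of the concept, not of either tool.

```python
# Two "feature branches" cut from the same trunk state:
trunk = {"counter": 0}
branch_a = trunk["counter"] + 1   # feature A increments its copy
branch_b = trunk["counter"] + 1   # feature B increments its copy

trunk["counter"] = branch_a       # merge A: counter is now 1
trunk["counter"] = branch_b       # merge B: still 1 -- A's change is lost
assert trunk["counter"] == 1      # semantically we wanted 2

# Optimistic locking detects the divergence instead of silently losing it:
def compare_and_swap(store, key, expected, new_value):
    if store[key] != expected:
        return False              # state moved since we read it: retry/merge
    store[key] = new_value
    return True

store = {"counter": 0}
assert compare_and_swap(store, "counter", 0, 1)      # A lands
assert not compare_and_swap(store, "counter", 0, 1)  # B's stale write rejected
```

A pessimistic lock would instead block B from reading until A finished, which is the transaction analogue of old lock-based version control; either way, the underlying problem is the same concurrent change to one piece of information.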
If there is a set of ideas that are good, and you stick to them due to experience and knowledge, it appears dogmatic to some. But that's mostly due to their lack of experience. You don't have to jump from every bridge to know jumping from bridges is generally lethal. Agile in itself is a set of ideas that appear dogmatic to some. As with everything, these things have to be applied reasonably and situationally.
Those who approach things in a dogmatic way, aka without putting in thought as to why something is bad or good, do it badly. However, there are things that are tried and tested, and either do or do not work. Also, there are absurd ideas that do not merit being tested.
I would really like that my main issue was moving from feature branching to CI :-)
The architect's proposal is usually his/her own enhanced version of git flow, and the developers' proposal is long-lived developer branches. How can they work if someone else is continuously creating bugs in their code?
Not a question of skills, more a question of mind-set IMHO.
9:59 "Hell is Other People" - Dave Farley 😂
like the Tshirt
Pair programming, apart from being an unpleasant experience for many people (this is purely subjective; just because it is pleasant for you doesn't mean it is pleasant for everyone), is a completely impossible practice in a team where junior developers greatly outnumber senior developers (which is a very common setup). In those scenarios it is impossible to have pure pair programming, because by necessity it pairs together two junior developers who are usually equally clueless about how to write good software (through no fault of their own; they just haven't yet had the time to learn), and you usually argue against having pull request reviews (at least that's what I've seen so far), so the code would not get a proper review from an experienced developer.
Even Google doesn't use full-time pairing practices; they use the typical code review process that most companies have (most companies do not utilize pair programming). Pair programming without later code review is inherently unscalable in companies like Google; they simply need to have code review regardless of the fact that the code was developed by two people looking at it, because they could be modifying code that does not belong to their team.
I don't know if the correct word for your advocating of pair programming is dogmatic. But I don't see any evidence of pair programming being better than a standard code review process in most projects. On the face of it it looks like an inherently slower approach, that only works in a very small team with equal distribution of senior and junior developers. But then senior developers rarely get paired together which means senior developers don't get proper code review from other senior developers. You have never presented any convincing data that pair programming is inherently better than a typical code review process that most companies use. Also, the fact that most companies do use typical code review process and not pair programming should tell you something about what most software engineers (including many very talented and experienced ones) think about how practical approach it is.
Another reason that pair programming is no substitute for code review is that in some organizations your code has to be reviewed by multiple developers. In some cases one of the reviewers has to be an architect.
One of the best episodes, if not the best in quite some time IMO. Not because I didn't like the others, but because it's tackling such an important and rarely discussed topic: biases in our industry. In the end it's reality that matters, and those who delude themselves are simply more likely to fail.
My Product Owner wants to know how long it'll take to make the Earth flat.
You're lucky, you've got a Product owner. I'd kill for a Product Owner to decide what the damn thing is meant to be and do. I'm just a bare-footed urchin happy to live in a pothole in the middle of t' street.
TDD is a good example of an iffy practice to make universal. It's typically slower and bad to use in speculative software.
I'm usually not sure, when I start a new system, how it is going to come together. Creating test points for rev 1 just means I'm taking longer to learn what probably doesn't work.
After I've built a working model and know, more clearly, what a proper implementation looks like, I can then move to solidify it with revised code and then with tests.
Developer built tests always come with the handicap that they are built with the same understanding, blind spots, and potential coupling in/with the code.
What you are describing sounds a lot like writing a Proof of Concept, or a Spike in Scrum terminology. Another word for it is exploration testing. You're just doing it manually instead of using a test framework to ask the questions. There is a lot of value in automating such tests, especially if the questions you're asking are about external services. Those tests turn into contracts, and can tell you if one of your assumptions ever changes. In the end, that's still test-driven development. After all, it's not called unit test driven development.
Reads like a post from someone who has never practiced TDD to me. If your tests are tightly coupled to your implementation then something is wrong. You’re not just ensuring the implementation works but that your solution design is loosely coupled and you have separation of concern etc. It helps that when things do ‘come together’ you have clearly defined boundaries. You don’t need to know the overall design of the final solution to practice TDD, far from it. Try starting from the actual business/domain logic and work out from there as this is the most important part of the application.
@@leerothman2715 - Unit tests are coupled to the implementation at the function/method level. That's why they are unit tests instead of integration level tests.
TDD != Unit testing
Robert Martin has a demonstration of TDD which opened my eyes. I'd have to find it and come back to the comment, and I'm on my phone now, but I'm pretty sure it's in part 5 of a talk where he's wearing a white shirt and has a white background, a long video, and the demonstration is somewhere after the middle of the video. I wish you good luck with this, but if you find it you'll remember me.
Love your HGTG T-shirt!
Well, obviously people need to repeat the same or similar mistakes over and over. An idea might seem good at first, but the true costs and pitfalls manifest later. You can help and warn people, but some will ignore it until they have the full experience themselves.
Some things are just objectively better in measurable ways. That's not dogma. That's facts.
Out of curiosity is there any article describing Space X using TDD?
Why are smaller companies not implementing continuous delivery (trunk-based development) instead of feature branching (gitflow)?
Some do, and it works great. I generally assume that people don't adopt CD as a result of lack of experience or ignorance of the approach, because it does work better than the alternatives, and is easiest to adopt at the start of a new project.
If people are calling you dogmatic, it's probably about how you're saying things. We can't look into your mind and somehow feel you're actually capable of changing your mind. All we see is you speaking in absolutes and saying things like "x doesn't work" when there are clearly companies delivering software while doing x. Or little lies, such as claiming that Google and Amazon do TDD when they don't. I work at AWS and have worked in 3 teams and collaborated with many others. We all laugh at the idea of prescribing how people should work.
So, rather than dig in your heels, listen.
This! All tools and processes have valid, nay optimal use cases. One has to always use their brain and adapt. But you see, you can’t sell common sense can you?
Sounds more like unnecessary churn to me, because not every project needs a "committee of experts" to catch a mistake. And you shouldn't be having to make key design or functional decisions on a daily basis anyway. "Agile" means knowing what you are doing and knowing how to incorporate the right decision making and collaboration for each project, not following the same blueprint for everything. It shouldn't take a team of experts to catch that a shelf won't fit if the person designing the shelf knows what they are doing. But then again, you can't properly build a shelf in a vacuum: you need information on the purpose of the shelf, where it is going to sit within a larger space, the area available for it, what it is going to be holding, and so forth. Those things then become the key testing and development criteria for the shelf. None of that should require daily reviews either, unless you collectively don't know what you are doing and are simply trying new things to figure it out, which is churn.
The problem is decisions. Somebody - the user, designer, product manager [spit] - has to make decisions and be held accountable for them. Agile doesn't enforce that and what's more gives them an opportunity to dodge. "Oh, that's bureaucracy, we need to move at web speed". "We're not doing BDUF, LOL".
Thanks for the video =)
I think you meant to write ++retVal. In your example, the merged result actually returns the correct number :)
One thing I don’t understand is how can you do something which takes more than one day to implement? Do you merge your half finished code?
Yes, but we work in a more incremental way, so that half-finished doesn't mean "low quality" or "not working"; it just means half of what it takes to make the feature, but written to production quality and fully tested, as far as what is there goes.
@@ContinuousDelivery yes that makes sense, thank you. Maybe we are just dealing too much with legacy stuff which was not originally written with this in mind.
@@mudi2000a quite possibly, this way of working does rather depend on you having automated test coverage that you can trust, and rely on.
Also! Feature flags are awesome for this 🙂
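To make the idea concrete, here is a minimal, hypothetical sketch of the pattern being suggested: half-finished work is merged to trunk daily but hidden behind a flag, so users only ever see the existing path until the new one is complete. All names here (`new_checkout_flow`, `checkout`, the 10% discount) are invented for illustration, not taken from the video.

```python
# Minimal feature-flag sketch: incomplete work ships to production behind
# a flag, so it can be merged daily without being visible to users.

FLAGS = {"new_checkout_flow": False}  # toggled via config/env in real systems


def is_enabled(flag: str) -> bool:
    return FLAGS.get(flag, False)


def legacy_checkout(cart):
    # Current behaviour: stays live while the new flow is built.
    return sum(cart)


def new_checkout(cart):
    # Work in progress: only part of the feature exists yet, but what
    # exists is written to production quality and tested.
    return round(sum(cart) * 0.9, 2)


def checkout(cart):
    if is_enabled("new_checkout_flow"):
        return new_checkout(cart)  # merged, but "dark" until flagged on
    return legacy_checkout(cart)


print(checkout([10, 20]))           # flag off: legacy path, prints 30
FLAGS["new_checkout_flow"] = True
print(checkout([10, 20]))           # flag on: new path, prints 27.0
```

In a real system the flag would come from configuration, an environment variable, or a flag service rather than a module-level dict.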
Awesome 👍
I am dogmatic:
The only thing I know for sure is that I know nothing.
All other is based on beliefs that my experiences can be generalized and
other peoples experiences can be verified.
To get anywhere one needs a strong mind and a stoic sense.
Scientific method will set us free:
Study - Plan - Act - repeat.
Toxic processes:
"misused waterfall" as of projects longer than 2 months (the author of "waterfall" never intended it to be longer than 2 months) ... Dave probably did not know that fact.
"corporate scaled Agile" as the blind scaling counteracts the goal of agility for teams in an "Agile" approach
In my opinion TDD concentrates too much on automated testing and ignores other testing possibilities. Sure, automated tests are very useful, but in some cases they are much easier to write and maintain than in others. Tests are easy to write for discrete entities; for continuous values this is much more difficult, especially if those values have random errors. An example of such values is measurements. For them it is useful, in addition to writing automated tests, to write tools which work together with our program, receive its data, and display that data graphically, so people can analyse it.
Continuous values are a unique test case; I haven't really seen it discussed much. You could test up to a certain precision, I guess, but if losing some of the precision is inexcusable, you definitely are going to have to write a more thoughtful test.
Although, if you are comparing data graphically, aren't you tolerating some loss in precision one way or another? How precise are we talking when it comes to people analysing data? It's like checking whether a web page's form element is in the correct spot to the exact pixel (and somebody missing that it shifted one pixel left or right): that's pretty precise.
@@retagainez One of the tasks that I have is measurement-to-track association. I can estimate the RMS of the azimuth and distance errors for measurements. I also extrapolate the track and calculate its "errors" (an error covariance matrix). Then I can draw the region in which the track's measurement must lie with a probability of 0.95. This region has the form of an ellipse. People can see whether a measurement is inside the error ellipse or not. The same result can be calculated using the Mahalanobis distance.
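For readers unfamiliar with the technique described above, here is a small stdlib-only Python sketch of the 95% gating check, assuming a 2-D measurement and a known innovation covariance. All the numbers are made up for illustration; a real tracker would predict the covariance from its filter.

```python
# Gating sketch: a measurement is associated with a track if its squared
# Mahalanobis distance from the predicted position lies inside the 95%
# error ellipse (chi-square threshold with 2 degrees of freedom).

CHI2_95_2DOF = 5.991  # 95% quantile of chi-square, 2 degrees of freedom


def mahalanobis_sq(meas, pred, cov):
    """Squared Mahalanobis distance for 2-D points; cov is a 2x2 matrix."""
    dx, dy = meas[0] - pred[0], meas[1] - pred[1]
    (a, b), (c, d) = cov
    det = a * d - b * c
    # Explicit 2x2 inverse applied to the innovation (dx, dy)
    ix = (d * dx - b * dy) / det
    iy = (-c * dx + a * dy) / det
    return dx * ix + dy * iy


def gate(meas, pred, cov):
    """True if the measurement falls inside the 95% ellipse."""
    return mahalanobis_sq(meas, pred, cov) <= CHI2_95_2DOF


cov = ((4.0, 0.0), (0.0, 1.0))            # larger variance along x
print(gate((2.0, 0.5), (0.0, 0.0), cov))  # d2 = 1.0 + 0.25 = 1.25 -> True
print(gate((6.0, 2.0), (0.0, 0.0), cov))  # d2 = 9.0 + 4.0 = 13.0 -> False
```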
@@retagainez When testing continuous values you can also write automated tests. For the simplest cases you will know what values the tested function must return for a given argument, and you can compare actual results with expected values, taking computation errors into account.
If you don't know the exact values a function must return, you can try other strategies. For example, if the function has an inverse, you can apply the function to an argument, then apply the inverse function, and compare the result with the initial argument. Or you can calculate the result using different approaches. There are no standard strategies that will work for every case. In the worst case, if you cannot check the correctness of the results at all, you can save the results for some test cases and check against them later to find out if you broke something when you changed the program.
Using graphics you can check, for example, whether functions have the expected shape.
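Both strategies described above can be automated with ordinary assertions and a tolerance. Here is a small illustrative sketch, using a polar/Cartesian conversion as a stand-in for the real functions (the functions and tolerances are assumptions chosen for the example):

```python
import math

# Strategy 1: compare against a known expected value within a tolerance.
# Strategy 2: round-trip through an inverse function when exact expected
# values are unknown, and compare against the original input.


def to_polar(x, y):
    """Cartesian -> polar (radius, angle)."""
    return math.hypot(x, y), math.atan2(y, x)


def to_cartesian(r, theta):
    """Polar -> Cartesian; the inverse of to_polar."""
    return r * math.cos(theta), r * math.sin(theta)


# Strategy 1: the 3-4-5 triangle gives a known radius of exactly 5.
r, _ = to_polar(3.0, 4.0)
assert math.isclose(r, 5.0, rel_tol=1e-9)

# Strategy 2: round-trip an arbitrary point and compare with the input,
# tolerating floating-point error.
x, y = to_cartesian(*to_polar(0.7, -1.3))
assert math.isclose(x, 0.7, abs_tol=1e-9)
assert math.isclose(y, -1.3, abs_tol=1e-9)

print("all continuous-value checks passed")
```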
@@ЛукашевичАнатолий Right, it sounds like this depends more on generalized mathematical formulas or certain properties or axioms, which can be difficult to put into code let alone understand them.
I'm still in PTSD recovery from a traumatic agile experience.
Teams require human interaction. That in and of itself is a bit of a roadblock these days. So is being rational.
This is why I don't understand Allen Holub's distaste for measurement. If you can measure something, you can try different ways of improving the thing you're measuring and compare how effective they are. Even when there is no "measurement" there are still measurements of a different kind. For example, if you don't use the DORA metrics to measure agility then you're probably measuring compliance to some authority's rules for a development methodology. Which is a great way to end up with cargo-cult Agile!
That was some very unexpected twist when you first brought up Bill Gates as an example for "Famous Person but Bad Idea" and then Steve Jobs as "Disliked Person with Good Ideas". Made me actually laugh. R.I.P Steve, though.
I admire a lot of what Steve Jobs did, but I separate it from who he was, the people close to him say he wasn't a very nice person. He was often actively cruel to people, including his own daughter. His Biography is interesting reading.
So... where exactly did agile go wrong? Where is the link to agile? I must have missed it - or is the title just click bait? I love the juxtaposition of your arguments with scientific principle, but where does agile come in? For me agile does not contradict CD or TDD at all - they can go hand in hand. #confused
The point I was trying to make, and probably didn't make strongly enough, was that the "self-organising" principle in agile thinking is often taken, wrongly, to mean that every opinion is equally valid, some opinions aren't! So we need better tools to decide which opinions are worth considering and which are not.
I have just come to the conclusion it's not worth "fighting for improvements" when it comes to how the business is run.
Management, "Digitalize Agile."
Workers, "?????"
Management, "It's a corporate initiative."
Having had to use some of their software, I wouldn't say Google's or Microsoft's software is "good"... in fact, on some occasions I would describe it as "bloody awful"... I think the term you are looking for is "profitable", which is not at all the same as "good".
I don't think a video like this really helps in convincing the people who call you dogmatic, because I feel like you're not even aware of what their complaints are. If someone calls you dogmatic, and you pull out a video where you torture the scientific method to arrive at the exact same conclusion you were already convinced was true beforehand, you look *more* dogmatic, not less. Since you want to see the holes in your argument:
1. Some ideas might be dumb, but that doesn't mean they don't have any area of applicability: The earth might not be flat, but on a small enough scale, that's a perfectly valid assumption.
You might dislike waterfall development, but for example on certain environments it's pretty much *the only available option*, so trying Agile in those environments is just a distraction that wastes everyone's time.
2. Cherry-picked examples: You support your claims with how teams in successful companies have seen good results, without mentioning that other teams *inside* those same organizations have done *other* things and got good results as well. It's almost as if being a big company that can afford to pay extra for talent might have an impact on performance!
Showing teams in successful companies finding good results with these methods is just not enough. You'd need to show for example that they don't get good results with other methods, or that they at least get better results, and then *maybe* you have something. Then you just need to make sure that you can replicate that across companies, across teams, across projects, ... Not doing that is just cherry-picking.
3. Discrediting preferences: The problem with the argument being made is that on this topic *preferences are actually important*. I can *measurably* see in my work that I'm more productive when I have scented candles in my office. I am more comfortable with them and can stay longer at my desk actually working instead of being distracted. That is *my* preference, and that same candle might be an annoyance to other people, so it's perfectly reasonable to infer that preferences (even silly ones) have an actual impact on performance.
Like in point 1, you're generalizing your conclusions without real evidence that they are general principles.
4. Aim to prove/try it: If you are being skeptical about your ideas, reports from other people saying they did not find the same results should be interesting and counted as evidence, not dismissed with "you're doing it wrong". Doing that makes your argument an unscientific one, since you're making it unfalsifiable.
The best comment by far. You cut right to heart of the issue.
Great comment, I also finished the video feeling it was a lot more dogmatic, not less.
I'm still convinced a good Agile team will be more productive than a good TDD team on most kinds of projects, and I dont even like Agile that much.
I don't agree with having to commit (and by that I mean pushing upstream) at least once a day, or N times per day; it sounds somewhat ridiculous unless the code is ready to do something. For instance, I could commit "bool function foo() { return true; }". Is that ready for production? Well, if nothing else is using this method, it is ready for production, but it makes no sense IMO. Making this kind of statement in such a generalist way can be dangerous. You should only push your changes once you are done with them (making atomic commits is something else).
I pretty much agree with everything else though.
Feature branches vs. continuous integration seems like a false dichotomy to me. I don't see why a team couldn't streamline small, short features while continuously integrating. Oh, okay. It has to be integrated everyday. You probably should have started with that to reduce confusion.
They can, but what that takes is that you can finish each feature in less than 1 day, and that isn't how most teams that practice FB work.
@@ContinuousDelivery That's how all the teams I have worked in work ;o) A developer works on a feature, typically splits it into multiple smaller pull requests (not all at once of course, one after another) they get reviewed, they get merged, the work moves on. This is very typical workflow for many teams (including Google, although they do not use feature branches).
I think whenever you mention feature branches you're talking about something that some teams USED to do, and probably some teams do to this date, but that is no longer the norm in software development: a big feature that needs months of work, which nobody wants to put into the final product before it is finished, so it stays on a feature branch for a few months and then gets integrated into the main branch.
I'm sure that there are some teams that do this very often, I'm sure that there are many teams that do this very occasionally, but most of the teams I worked with would not go this way and instead choose things like feature flags or outcommenting some part of the code or similar.
Obviously those months long living feature branches are a bad idea and I think even those companies that do it that way feel like it's a bad idea and have some other reasons why they have decided to accept the downsides of that idea due to the benefits that it gives them (such as legal reasons etc.)
16:10 But why is integrating once a day CI while, say, integrating once every 2 days is "by definition" not CI?
Can't our team simply (re)define CI as requiring integrating once every 2 days? Maybe it makes more sense this way for our project.
If yes, then, how much can you stretch this until it objectively stops being CI?
I sense that this approach of defending a methodology based on evidence is not sufficient to make it happen at a broad scale. The main issue with TDD in particular is that it requires people to practice TDD in order to realise its value, since it is very unintuitive at first as to why it works. The main obstacle we all face in various industries is the group inertia instilled by the older, more senior engineers and/or the way it was always done, and resisting that is very hard. Like someone said in another comment, it makes you feel like "a black sheep non-team player of the crowd".
Another problem with TDD is that many people unfortunately apply it incorrectly, which then leads to writing many brittle test cases that break when refactoring. People then get the feeling that TDD only slows them down and eventually stop caring about it. More needs to be done to educate people on how to write correct tests; this is still not properly understood even as people start to jump on the TDD train.
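A sketch of what "applying it incorrectly" often looks like in practice: the first test below couples to a private data structure and breaks under a harmless refactoring, while the second asserts only public behaviour. The `Basket` class and its internals are hypothetical, invented purely to illustrate the distinction.

```python
# Brittle vs. behavioural tests: a common source of "TDD slows us down".

class Basket:
    def __init__(self):
        self._items = {}  # internal detail: name -> price

    def add(self, name, price):
        self._items[name] = price

    def total(self):
        return sum(self._items.values())


def test_brittle():
    # Reaches into a private structure: breaks if _items is refactored
    # into a list of tuples, even though behaviour is unchanged.
    b = Basket()
    b.add("tea", 2.5)
    assert b._items == {"tea": 2.5}


def test_behaviour():
    # Asserts only on the public contract: survives internal refactoring.
    b = Basket()
    b.add("tea", 2.5)
    b.add("milk", 1.5)
    assert b.total() == 4.0


test_brittle()
test_behaviour()
print("ok")
```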
Fixed the title for you: "Some people are just dumb" 😂
Agile, the art of pumping out shit as fast as possible to strict dead lines, then running away, to the next project.
Potentially true haha! BUT whilst the waterfallers are still gathering requirements and haemorrhaging money... the agile company and its 💩💩 is earning money, OR they've quickly realised the path they are going down is wrong, so they stop and change direction.
A lot of Agile Practitioners treat Agile like a religion - when the development is a success it is because of agile. When it fails it is because the team didn’t do enough Agile.
Is it a common expectation that the title of a video is not related to the content? Or did I miss a segment of the video; how is this video related to a fault in Agile? I understand contra-agile titles get clicks, and so I can respect that creators making good content have to play that game, but I'm not yet sure if we viewers should accept that emergent result from YouTube.
But here I am commenting - a behavior I worry will reinforce the use of clickbait titles. Do you have insights to share?
A non-dogmatic approach would be to say that you don't have any metrics showing that TDD or pair programming works better than anything else. They work well for your team because you hired developers who want to work that way. If TDD and pair programming worked better than the alternatives then after two decades there would be strong evidence for that. But there really aren't any metrics showing TDD and pair programming are superior.
Dave is right when he says that arguments from authority aren't sound. So why does he use Carl Sagan as an authority?
At 8:33 he says "...the necessity of having a model of why A works better than B that allows you to compare it to the alternatives."
Models need to be tested, though. When did you test the different practices of software development against each other controlling for all the variables? It's ironic that an advocate for test driven development ignores the need to test the practice he advocates against alternatives.
Dave cites SpaceX as an example of the success of TDD. But the flight control software for the space shuttle was written using waterfall. Why did he leave that out? These two examples show that very different practices can succeed in producing large, complex software systems. Neither is a reason why anyone should adopt those processes, however. This is a kind of argument from authority: 'SpaceX uses TDD so should you'
www.fastcompany.com/28121/they-write-right-stuff
Nope, the SpaceX flight control software is written with TDD using Trunk Based Development.
@@ContinuousDelivery - Ummm, that's what I wrote. SpaceX was written with TDD and the space shuttle was written with waterfall. Unless you're objecting that I left out Trunk Based Development. TDD and Trunk Based Development are two different things.
You still need to come to grips with the fact that waterfall has produced a lot of very good software.
Also, no, there is data in favour of both TDD and Pair Programming, more than there is against them, but the data is not good enough, which is why I don't rely on it. Equally, you can point to no data saying that NOT using TDD is better! So we are at an impasse, and Sagan's Baloney Detector is what we need.
@@ContinuousDelivery - And that lack of good data is the real problem. Given the importance of software development to the global economy we really need good data. Not data on student projects, but data on real projects done by experienced developers.
What is needed are controlled experiments on real-world projects using professional developers. That would give us reliable data. No one is even talking about doing realistic experiments on this scale, however. But we need them.
@@deanschulze3129 There were small-scale controlled experiments, and they showed an advantage for TDD. So where is your data to suggest that your approach works better?
I wish there was better data, but there is no strong data for either approach. What we do have great data in favour of is Continuous Integration, which tends to go hand in hand with TDD; CI builds better software faster than any alternative.
That’s a great thing to be called dogmatic about.
Your comments re branching are so odd to me. If you group changes to a system by the impact they will have, you end up with a range from trivial to redesign. "No-branch" development will work at the trivial end of the scale but NOT at the redesign end. "No-branch" assumes a design in which the code is partitioned so that changes are isolated. At some point you will hit changes that are problematic because of the design. In that situation, why would you throw away the option to branch? That makes no sense at all; unless it's a completely new project, that is surely worse than branches?
Well, in my experience anyone who pushes some sort of idea along the lines of "don't do x, do y" is dogmatic. I mean, this stance against branching is very silly. Torvalds made branching cheap for a very good reason; do we honestly believe we know better than him? I've also heard other people say: don't use cherry-pick in your strategy. Say what? Again, Torvalds implemented cherry-pick for a reason. Same with force push. Learn your tools well and use ALL the features if necessary!
"Don't hit yourself in the face, Learn new things instead" - dogmatic?
I guess that you didn't watch the video to the end? One of the items in Carl Sagan's Baloney Detector, the core of the episode, is don't take "argument from authority" seriously. Torvalds made branching easier because his problem was managing changes to one of the biggest and most important open source projects; that is NOT the same problem as working in a team on some software.
Love the T-shirt
I find it depressing that there is need for a video like this ... I thought this was obvious.
Where agile gets it wrong, especially scrum, is that there is no peer-reviewed empirical evidence to support the claims of agile proponents. Kanban and pair programming are two minor exceptions.
If I'm only allowed 1 day to test my changes and verify them before I have to merge them back to "master", I would go mental within a week. I can't conceive of any change of any substance that would take only one day to define, code, test and verify. What kind of software does Dave think people are writing? CRUD webapps?
I write embedded systems code, in an environment with cross-compilation, custom ARM hardware, and testing that NEEDS human intervention, as the systems are moving motors and using sensors. No fully automated testing is possible.
This is where Dave "triggers" people. It seems to me that Dave assumes everyone is working on simple, low-complexity, technical-debt-free, beautiful code bases. In reality most of us are sitting on a mountain of crap, trying to keep it from falling apart for one more day.
And only 1 day to merge creates a people problem: if you merge by 16:00 and the day ends at 17:00 and your thing breaks something, you now have a fresh shit-storm to deal with.
I don't get it. In the middle you say that if TDD doesn't work for some teams, it's because of a lack of skill. And in the end you say that lack of skill isn't a good argument because it's not falsifiable. So why do you use it then?
In your system, doesn't it require almost all of the people on the team to be top developers who deeply understand the ideas you promote? Something you have at Google and Facebook perhaps but which is rarely a given in a regular company.
Maybe give us some examples where someone who commented on the channel, changed your mind on some fundamental topics ?
I must say, one area where you helped to change my mind is on shorter-lived branches. I am now a fan of feature flags for new features that are incomplete, where possible. I only came round to your way of thinking after some rebase headaches.
But "no branches"? Let us say you are upgrading libraries, see a major set of issues, and then need to store your work before going on leave. Where does this work get stored? Are you just going to merge the broken code? I feel branches are practical, and need to be managed.
I'd always critically question if you really can't do such upgrades incrementally to prevent large pockets of broken code. However, I agree that there are some cases where one can't do them incrementally because they present a major set of issues that span more than one day of work.
But honestly, how common are these? I've experienced these cases when upgrading application frameworks, e.g. from .NET Framework to .NET Core or going from Xamarin to MAUI. This kind of upgrade happens once in many years however.
Whereas how often are you introducing new features that can be directly committed to main just fine? I don't know your work context but would it be weird to say that can be 90% of the time?
So in such a situation I'd propose to be pragmatic and find a balance. Use CI (no branching) as the default way of working. And for those major refactor/upgrade undertakings, make a rare exception that is well-known to the team and create a separate short-lived branch.
As you can read I agree with you that branches can be practical; they have their utility. It is the rigid black-and-white thinking of many people (doing either 100% CI/TBD or 100% feature branching) that blinds people to knowing when to choose the right tool for the job. Understanding trade-offs and being flexible can be difficult for many.
You're basically repeating the first half of Charles Saunders Peirce's methods of belief.
I appreciate this video, but it would help to keep the pictures on the screen longer instead of making you and your t-shirt the main focus ;)
I think genuinely people misinterpret what he's up to. And the difference of dogmatism versus being opinionated is helpful.
However, the title of the video does not match the content and continues to give agile a bad name.
In between, it's more about "branching" vs. "working on the develop branch continuously", where he then again proves the point that being more agile (short cycles) is the better way in general. To me it looks like he is not arguing against Agile at all.
Try arguing for a stiff hip and I will let go of the idea of Agile. ;)
The title of this video is pretty misleading. You only mention it in the video description.
A better title would have been: a message to all my h8ters
I didn't mean to mislead, but I think that I didn't make the point about Agile as strongly as I meant to. I was referring to the equivocation that all viewpoints/ideas are equally valid, and I don't believe that they are. So, what we need in that case is tools to help us to decide between the bad ideas and the better ideas.
This channel has become a series of bait videos. It seems like you just want to make people mad so that they engage: commenting, YouTubers reacting, and all that.
Very sad to see this.
Agreed.
...and yet.... you're here engaging.
@@McGrigorNZ - We keep hoping that Dave will break out of his cocoon.
Bait titles I would say... the video content is good.
This is a kind of clickbait. Self organization is also about decision making. If the team allows that everyone has a veto on every idea, then the team has an impediment to solve. Nobody would argue that cars are bad because some are driving on the wrong side of the street.
Mike, I believe I see a fallacy in your reasoning.
Take this statement: "no human can run 100 meters in under 12 seconds". That is clearly wrong.
What about this: "no human can run 100 meters in under 9.6 seconds"? FYI, that's also wrong, but only ONE person has done it.
Your statements make perfect sense in a world (and teams) full of world-class devs, in a company that has already completed the modern-age mindset shift (ALL companies are IT companies), and where time and money are not scarce.
Good luck telling your message to developers who write C in Notepad and don't use version control.