I think there is a little bit of a disconnect in this discussion. Jim is talking about software design, and Bob is talking about development practices. TDD is test driven development, not test driven design. You can, and should have an architecture layed out to accomplish the goals of the program. Then you write the code with TDD to fulfill the design..
"Writing unit tests in advance / TDD allows you to create an inherently testable architecture, because it is already tested by the time it is done" is a common claim made for TDD. Therefore, while it is called "test driven development", it is no wonder people equate it with "test driven design".
Bob is speaking clean as he is coding clean. His speech is so clear and easy to comprehend, that dumb can understand him. Jim is too fast for my taste and is not controlling his thoughts to be simple and clear as Bob is doing. Bob is a great theoretician. His theories are sadly, very truthful. We developers don't have a problem with the facts from theory but with the appliances of practices to fulfill that theory.
Jim also talks about how TTD reduces velocity in half - this is really a poor argument because with the TDD you actually much better preserve that velocity as the project grows in complexity. It is true that TDD will initially reduce the productivity of the team and for a relatively small projects this might be an issue. For a larger and more complex systems, however, TDD will actually improve the productivity in the long run by helping with the velocity where it matters the most: during maintenance, extension and refactoring.
It's a common fallacy that writing more code makes you slower, TDD actually makes you code quicker because you get instant feedback on what you are writing. If your dependencies are loosely coupled, the architecture changing won't impact the tests that much. They are just a unit that you can move around and reuse where ever you want. If you gutted the whole application, then yes they will fail, but they are meant to
@@bmurph24 With TDD, you test behavior, not implementation. Refactoring, by definition, does not change the behavior and therefore should not break any tests. This is the thing that many people get wrong. They test specific implementation which leads to the refactoring friction you described. Tightly coupling unit tests to the specific implementation is also responsible for the velocity degradation Jim mentioned.
@@milosmrdovic7233 Unfortunately it's pretty hard to ensure a test is just testing behavior. I think because the requirements and your own domain understanding changes over time. The interfaces I've created to define that "behavior boundary" (or whatever I've used to define that boundary) turns out to be wrong. That triggers a refactoring which fundamentally changes the boundary and then I end up having to re-write tests. TDD says "this is normal and good" and I suppose it is but it's a lot of work and one of the benefits I was hoping to have with TDD is confidence during refactors, which is somewhat hampered by the fact that I had to change the tests. I will say this is mostly a "unit" level problem. "Integration" tests have this problem less. There seems to be a lot of disagreement about the distinction, but I'm going to to say integration tests are defined by a library boundary that you must adhere to for fear of breaking clients who are defined as people who are not my team. Most teams seem to use "does it do I/O" or "how fast does it execute" to define the distinction, which I think leads to implementation specific tests. When there's overlap between the two it's easy to look at the unit side and think the general behavior is covered by the integration test, so lets dive into the specifics... which is the implementation. And then the code coverage tool also says your integration tests missed these branches so it further encourages you to write those implementation specific, mockist tests. Or if you write tests to cover bugs as you run into them, those tests in theory should verify a behavior but in practice they dive into implementation and will break on refactor. And it depends on what kind of projects you're working on. UX heavy and I/O heavy projects are very hard to test effectively but libraries that are essentially pure functions are very easy to write tests for. It's actually fun in the latter case. In the former, you fight with selenium, then find out your mocks are out of date or the dockerized version of your dependency isn't acting like its production counter part and so on. Then I try to break my UX and IO heavy project into just functions, but that again is a trap where I can accidentally start testing the implementation. For (trivial) example, writing a test for all functions that could show/hide the widget instead of rendering the widget and verifying that it shows/hides. Overall, in my experience my tests do reduce bugs, but they also add to the work load on most projects. That never really changes over time. At best its a wash. I think tradeoffs needs to be more prominently mentioned as part of TDD instead of the belief that it will always be a net gain in productivity and that if it isn't its a failure to implement TDD properly.
You can have successfull projects with and without TDD. But with TDD you have testable code. I think a lot of people have trouble using TDD because no-one said that it is actually hard to master. Learning the steps of TDD is easy, but mastering it is hard and that's why a lot of TDD projects fail imho. It would interesting to know how much time Uncle Bob investing in mastering TDD. Maybe it was just natural for him but the sad truth is, that for the majority of programmers it isn't.
The trick seems to be that "testing every line of code" gives you TESTED code, but it is going to be rigid, brittle, it will break. People have since found that instead of testing every single method directly (as mentioned "at procedure level"), you can instead only test high-level module public APIs (and coverage will come from testing said modules): ua-cam.com/video/EZ05e7EMOLM/v-deo.html&feature=emb_logo
@Oleksandr Kovalenko No it won't. If the tests verify the behavior, then as long as the internals reproduce the correct behavior, the tests will still pass
if TDD is hard to master and can make your project fail if not done properly, I would want to stay as far as possible from TDD. Sounds like a very risky investment.
In a blogpost he mentioned a story about a workplace with people coding on treadmills and super energetic coworkers showing him tdd. So he was taught in privileged environments
i've got more things out of this kind of discussion rather than political one where people just strawman each other. yes, it is long. but you don't cut the intricacies of such discussion.
Great talk. I want more. Correct me if I am wrong, but in TDD you are not aiming to hit that 2^32 (or whatever) state-space, but you are aiming to hit that 10-100 state-space of your domain (for each unit). Sure an address can be in almost infinite logical states (with a few string properties only restricted by memory), but really domain-wise it might only matter if its valid or not. That's the state space you want explored in your units that depend on address. Then you might say: well interacting units with 10 or 100 state-spaces will quickly blow up to intractable to test, and that's true from a top-down view. That's why units tests (bottom-up) are the essential tests.
Jim frequently mentions telecom systems and driving architecture based on well defined domain knowledge. The biggest challenge in software development is in fact poorly defined and ever changing domain model. Since there is no "magic" architecture thar can protect you against an arbitrary set of new unanticipated requirements, without good unit tests, the process of maintaining, changing and refactoring any complex piece of software is an absolute nightmare, extremely expensive and time consuming. This is by far the strongest argument for practicing TTD.
do you actually need tdd to write unit tests ...? ive met some really great programmers who write their tests after they finish an implementation - they all produce really great code which is also well decoupled and properly tested.
@@robdoubletrouble Bob Martin proposes that you "need" TDD to write tests so you don't have holes in your test suite. Most programmers who write tests after the production code don't write enough tests, so the tests cannot be relied upon when refactoring or enhancing large sections of that code.
@@BlitzMaul The holes are a major problem because they undermine the key benefit of having the tests in the first place: the refactoring safety net - the freedom to change anything without worrying about regression. TDD rules effectively eliminate the holes, IF you follow them without exceptions. They force you to think from the perspective of a consumer of your code thereby increasing the quality of your interfaces. They force you to control the dependencies early which eliminates unnecessary refactorings. They also force you to think about edge cases, where most bugs lie. TDD is not a tool, but a practice that minimizes the number of defects in your software and increases the quality of your implementations. Now, there are many practices in different professions that ensure certain favorable outcomes. For example, there is a practice that every surgeon undergoes in order to minimize the risk of severe infection: washing hands and using gloves. Imagine a cardiac surgeon who does not wash his/her hands before handling sterile equipment nor use surgical gloves while operating on an open heart. Even if he/she is the most knowledgeable surgeon you know, would you still consider him/her a responsible medical professional? Would you call that professional at all? But this wasn't always the case. Only 150 years ago in Vienna, the cradle of the European culture, one in every 10 children would die at birth. Surprisingly, that was the accepted norm that nobody paid too much attention to. Then, the doctors discovered that just by washing their hands, they could prevent 90% of those deaths! This is how the practice of washing hands and using sterile equipment was born. The above example is exactly analogous to TDD in software development. It's like washing hands. It's the combination of competence and adherence to established principles and best practices that distinguish professionals from amateurs.
@@milosmrdovic7233 Emm, not professionalism is when you know and understand your tools, their advantages and disadvantages, I am not arguing TDD is bad, research has been made, and at best it shows that it might be good, and at worst, that it has no demerit, but all papers conclude, that it is inconclusive; which in my book is good. It is a fact that there is no such a thing as a silver bullet to every problem, if you fail to see that, an/or forget that, every tool, is that, a tool, and a tools have purposes, Maslow's hammer is an excellent rule to keep in mind when thinking about this sorts of things. Do not close your mind, and become over reliant on tools and methodologies, there is a reason why every single "perfect" one has changed over the years, a professional, is not defined by TDD, it is defined by its ability to understand the whys, the hows and the tools to solve a problem and pick the best for each specific case.
@@luxiax1836 I have to disagree on that. A tool is an instrument used to carry out a particular function. A programming language, ORM library, Web Framework, or NoSQL database can be considered tools. A practice, on the other hand, is the application of an idea, a method of doing something. TDD, by definition, is not a tool, but a practice or a method. TDD is the best practice for minimizing defects in software. It isn't perfect, though. The main problem is that it requires people to have skill and know what they are doing. Many people are not willing to take the necessary time and effort to master it. So what are the alternatives? Design by contract is awesome, but as Jim pointed out, the "technology is not there yet". No one in their right mind would rely on automated code analysis to find bugs in their code. What else do we have?
My takeaway from this discussion is that you first have the high level system design/architecture in place and then use TTD to drive the low level design(domain model) with the system architecture.
This debate is really good, and really high level. The best part in my opinion is the one where the come down to real discrepancy over whether TDD is better than Design By Contract, where Coplien argues from a "lean" point of view for DBC and and Martin for TTD from a "professionalism" point of view. I think Design By Contract seems to better but today it unfortunately is not implemented in many languages. As far as I know it only exists in Java, but maybe im wrong?
Only if you know the domain well, the architecture of the system you are trying to model can be successful. Otherwise, its just try and miss (until you learn about the domain :) and everything else, TDD or not, will be a failure. So, I do agree with Jim!
The problem is you'll never obtain complete domain knowledge because of the ever changing nature of business rules. Without unit tests you simply cannot cope with the inevitable software change.
Hmm, I guess 2 years ago when I wrote that, my knowledge as a programmer was such that this video wowed me and the topic was a bit overwhelming to me. Not that it doesn't impress me anymore anymore. :)
Wait up. Design by contract is great, but it runs at program execution time, so it doesn't stop things from exploding in the user's face. Therefore it is not a replacement of automated tests, which are executed just after compilation. I mean, if you are using contracts, you still need to write tests to catch errors before runtime!
I think Jim Coplien pointed that out when he said that the tooling around contracts isn't as advanced in most languages as it is in Eiffel, where the contracts are checked statically (ie., compile time). So it's not really something can be done only at run time.
Coplien's point is mostly theoretical and probably better, but Martin's is practical and actually useful today in all languages. I experimented with ESC/Java, which enforces contracts statically via a theorem prover. I felt the potential, but it had too many issues to be practical. I thought at the time that it didn't get you there 100% as there are some contracts that can't be reliably confirmed. I thought they should require a unit test to confirm those unprovable contracts. So, you'd use contracts for most code, and just a tiny few unit tests to confirm the unproveable to the theorem prover.
Great debate and kept moving but offered some light. I’ve seen issue with my own tdd learning to how to plan db and architecture when a team is working on a project. Unless you also have an architect running in front laying breadcrumbs and scoping the work. Can be just a sprint or two in front though.
Besides this being a very interesting insight into TDD, Agile and Software-Development in general, how about that magic moment starting at 14:18, when they both start to raise (and flex?) their arms above their heads ...
You mean the two male gorillas posing to gain an advantage over the other? That's the weakest I have seen either. It tells you that they are both not so sure about the bullshit they are talking.
@@lepidoptera9337 Yes, exactly like that. :-) In my (lame) interpretation it looked like they tried really hard to impress/dominate the other... and that (unconscious) show of prehistoric behavior made me chuckle. ... or maybe that is just over-interpretation ;-)
@@weasel7581 No, I think my inner gorilla can still tell when somebody else's inner gorilla takes over his non-verbal communication. I was just waiting for the chest beating and feet stomping. We may even have seen some two-legged upright walking, if the two gentlemen hadn't calmed down a bit. :-)
As this has been years, I really wanted a follow up debate from these two. I flagged them down via Twitter and they both basically told me the debate is over. I've no idea what that means.
@BryonLape I'm reading a book examining the research on the effectiveness of various software development methods and the chapter on tdd said the evidence shows modest benefit for tdd.. but you often hear that there's more test code than actual code and that doesn't make a whole lot of sense to me. So I think there's a bit of art involved in getting tests at the right level of granularity.
My experience as a team lead working for 10+ years with 100+ software developers/engineers. TDD has not produce better code or had lead to less issues at all. It’s even quite the opposite. People, I met over the time who had a strong position on TDD wrote most times worse code and produced more issues 🤷♀️But at least their test worked 😒 So, I am totally with Jim. TDD helps not to be more professional, it’s a pseudo professionalism implied by TDD. And Uncle Bob is in my eyes a salesman, who sales ideas (mostly from others) for problems which are unsolvable. Like the Fitness industry selling you how to get in shape with the next best fitness trends. Uncle Bob has no real field / professional experience at all. As he said in one of his conference talks and I paraphrase, „I am writing here a little test … then there little code … her again some tests …“ If you want to become a professional in this sector is like everywhere, hard work, discipline, and lifetime learning. It’s not practicing TDD.
Got to be honest, I prefer Bob's more personable interface. Jim is quite tactical and it feels like he wanted to be right, more than explore the space. I'm ecstatic this lasted less than 30 minutes as frankly such arguments usually waste much more time and accomplish very little. If you have a language where asserts are present and are compiled and not stripped, the problem is runtime performance. If you have test cases where you are throwing random arguments against the test-suite, then you are not doing TDD. Property based testing isn't random arguments, and should inform concrete test cases, each time a break is found, you add another test case. Perhaps you know of some test cases that worked in the past and you write those when all else is failing. Bob's is more practical, and Jim's argument about recovery seems a strawman. Recovery might start as a class facade, but eventually can only either be parked or extracted to a separate system.
Jim Coplien totally nailed it! Both guys really rock! I agree that design-by-contract is really useful paradigm. I started using the D programming language precisely because it supports contracts and class-invariants...
20:20 Bob: who is it who first used DD with some letter in front, we have CDD, BDD, TDD? Jim: dd? Well, dd was a Unix command for disk dump. :-) made my day!
2 million lines of code being pretty small is a very smug remark. Working on projects with hundreds of millions lines of code sounds like gloating. If you have that much code you probably don't have that good of an architecture. I'm surprised Uncle Bob didn't call him on it. Uncle Bob looks much better in this debate, he's calm and humble.
He also said C++... if you want to create programs that work don't write in C++ especially if they are 100 million lines of code? That doesn't sound possible...
In a way, it appears both men are talking around the real problem - Inexperienced developers. As Bob notes elsewhere, a good 50% of the developers out there have less than 5 years of experience. I have taught programming in post-secondary for 20 years, and my general thoughts are that we're the ones who have failed in improving how and what we teach students regarding programming. I could say more about our role in creating and perpetuating the problem, but that would be a looong post.
One more thing about DBC that's powerful ... its contracts are inheritable! There's no such concept in TDD! You could do both since the two don't conflict, but why? You can do something that's leaner and more effective ... DBC. As the video mentioned, with TDD, you expand the code base dramatically, and introduce more bugs! Also, in TDD, where's the information that shows at a glance the limitations and expectations (contract) that one should have of a method or class? It doesn't exist.
Loved this debate. Didn't seem like they actually disagreed on much in the end other than minor preferences. Jim prefers contracts and Bob prefers unit tests, that seems about it, but both have great merits. Design by contract sounds interesting with its broad coverage of "tests" on the entire range of arguments, something that tests can't do. But I still wouldn't want to ship my code without at least end to end tests and integration tests, to actually see the code is definitively working as expected, at least in particular tests. I think unit tests are important too, as long as they're done right so they're not brittle or not actually testing anything.
And not much that you want to take home because most of this doesn't work in practice. I mean, sure, if you have to work on a domain specific software with a team that doesn't actually understand the domain, already, then, yeah... but that will blow up in your face, anyway. This level of advice is like asking a person who has never seen a grenade whether they want to pull the grenade pin with their left or right hand.
To me Coplien won this debate. Its a pity what Agile has done with the industry nowadays. Such deep thinking debate is unthinkable. Just STFU and ship me some code... whatever it is... thats the level of todays IT industry.
Just an observation. I get the points both of the guys are making. Full disclosure, there are very few disagreements between with what Uncle Bob proposes and what I would encourage. My main comment is that I believe the adversarial, who is right and who is wrong is not where we want to be. If tou disagree with me, that is a positive, we need have the seed of a conversation.
14:18 Uncle Bob: [Professionals] "practice TDD." Jim Coplien: "A professional to me is a person who makes money for doing a job in that area." Poincare once said "Sociologists discuss sociological methods; physicists discuss physics." Now guess who's the sociologist and who's the physicist.
Jim brings up "Professionals" to counter Bob's argument about "Professionalism" (equivocation fallacy). You fail in the same way by misquoting Bob with your bracket summary [Professionals]
@@colerees3965 Despite using the wrong term, this is like trying to win an argument by being pedantic about terminology without actually acknowledging or addressing the argument. This is childish unless you know the person and are just joking, that is, not taking the discussion too seriously.
I'd rather have messy tests than a messy legacy code. You could improve the project by writing production code and fail the unit tests then improve the tests.. The tests will get eventually better. But if you only have a messy code in front of you without tests, you really have nowhere to start. Every time you touch the messy code, it will become even messier.
How true. I find myself on a contract like this at the moment. Any unit testing would have been welcome. My approach with poor legacy systens is to write contract and system tests then work down to unit tests, often rewriting the production code. The contract tests then tell me if something broke.
15:44 the definition of unit test that Jim had is kind of flawed. A unit test is just a test of a component in isolation. Which is useful to ensure that certain scenarios can be quickly confirmed by a developer and the build system if it is automated. The API testing he is doing is more of an integration test because you're testing against a black box. In that factor I also want to have them done because they can bring up issues sooner than later. 17:55 Eifel does those checks at runtime there's another language that does static compile time testing called EVES which I used in university. 19:33 the problem with the coupling of tests and assertions in code is the cost of running those assertions at runtime. Though it may be moot if you can afford those high powered AWS instances. Not only that, there is going to be a cost to handle those assertion failures.
The argument for black-box testing over white-box testing is becoming more and more prevalent. ua-cam.com/video/EZ05e7EMOLM/v-deo.html&feature=emb_logo
@@Zhuinden my view is you need to do both. The white box tests allow for rapid tests during CI builds.. You don't want to wait 2 hours to find out that your test fails. Secondly with whitebox you can mock lower level constructs which you cannot really do on integration tests as easily.
14:18 pay attention to the body language. Jim exposes his chest assuming he's comfortable, then Bob raises his arms trying to show himself bigger, and Jim takes a normal position, followed by Bob. Very interesting how they feel they're debating a critical topic. Later, see their legs, Jim trying to show himself as the alpha male and Bob keeping an interviewer position. This is a fight between attitude and knowledge. I personally think Jim took the TDD idea wrong. One of the key pieces of achieving a good design writing the test first is, precisely, to define the contracts.
Great video. A really good eye-opener for me about TDD is this video : ua-cam.com/video/EZ05e7EMOLM/v-deo.html Very well explained what Kent Beck vision actually is :) Most of the people don't even know what a unit is and what TDD is all about. TDD is about testing behaviours, not a method, not a class. I think Jim and Bob are disagreeing mainly because of that different point of view : what is a unit under test?
Jim was talking about the big macro, Bob was talking about the small micro, they're not actually against each other, but mostly debating different areas. But like Jim, I find it hard to accept TDD as a religious dogma, you write 2 million lines of production code then you also have to write about the same LoC for tests. If you work for a small startup that has to deliver code in a fast project (3-6 months), Bob's TDD is like committing suicide, the primary requirement for non-TDD development to work in a setting like that is that you have to hire experienced programmers that can communicate in the same frequency. There will be less bugs in TDD of course, but as Jim said, you also cut the velocity in half. Great architecture at front is like a monarchy (they do evolve but incrementally), disciplined TDD is like village-level democracy all the way up to the central system. They don't have to compete against each other. If there's anything in the debate here, it's the focus of attention. When it comes to implementation they're both agree, write the skin, not the meat, let the meat evolves, you write interface but only with meaningful abstract methods that inherently describes whatever that interface is, be minimum, no speculation in your architecture, object, or interfaces, just the thing you really know.
personally use both contracts and TDD, the problem with contracts is the contract not tested until its run, I also like fail early which contracts on construction give you
my main reason for using TDD is a lot of your code can then be forgotten about (until the next refactor and you will refactor) and the more complex stuff concentrated on
Looks like Bob's stance is to hold as close to pure TDD as possible, because he understands there is going to be some leeway given to experienced Architects/Teams to understand that the Architecture is fluid. Jim is just more comfortable to express that you have to more frequently and initially look at the Domain and build some Domain objects. Not code, just objects, properties, ctors, contracts, and their relationships. I still side with Bob, mostly because I know that wherever you set the bar, the flag will wave 30% either direction. So set the bar high, and let the people "under" the bar still actually be within reason. But both of them are correct in multiple respects.
They were saying that the test mass is about the same as the production code mass ? What ? My experience is that the test mass is bigger, if not it means the code hasn't been tested enough and misses conditions. About lean, I've found that if the test code also isn't lean then at later iterations of refactoring you run the danger of having to spend as much time maintaining the unit tests as the code.
I remember having code mass < test mass in a project, but that was mostly because it was extremely critical to get it right. It was in python 2.6 so we needed to test even third party modules (even standard modules had methods that just returned without doing anything, goddang bonkers)
@@kbrnsr I think it also depends on the sophistication of the test harness. Modern, expensive tool-chains write and maintain much of it automatically creating test code from scripts. I'm stuck with Nunit which is pretty basic.
@@nickbarton3191 I thought new stuff like xunit would be backwards compatible with nunit (no experience with .net programming) Also I found a blogpost from Bob Martin's Clean Coder blog from 2013 (blog.cleancoder.com/uncle-bob/2013/03/06/ThePragmaticsOfTDD.html) which had this line that felt pretty relevant to what I was trying to do: --- I usually don’t write tests for frameworks, databases, web-servers, or other third-party software that is supposed to work. I mock these things out, and test my code, not theirs. --- --- Of course I sometimes do test the third-party code if: --- --- --- I think it’s broken.
@@kbrnsr just haven't the time to change test framework, not sure if it will give any significant advantages. The tests are deeply coupled to the legacy code which is bad but as I'm steadily refactoring I'm also refactoring tests too which are getting cleaner. They take too long to run for unit tests, I'm starting to run subsets of them. None of this is easy on a legacy project which wasn't well designed. I started with a file of 8000 lines of just global variables, now down to 160 ! I wanted to buy VectorCast which creates mocks automatically and has a UI for test definition. I notice in Bob Martin's article he says he creates mocks by hand.
Coplien is stuttering, contradicting himself, and running the conversation off into all kinds of other areas, all classic signs of someone who doesn't have substance to their argument. The part about "Oh but Eiffel..." is particularly laughable considering he mentions he uses C++ for most things. Does he even use Eiffel?
Really? I didn't hear that. Instead I heard a bad argument about banking accounts, where he presented that only one user of the account was to be serviced, and so no other user of the account was serviced. Anyone who takes the "simplest" thing not account for 2 users has made a mistake in requirement gathering, not in the mechanics of how the code was written.
it seems to me that design by contract in eiffel force every object, every procedure to have dependency on its preconditition,postcondition, and its invariants. jim thinks it is necessary coupling while uncle rob thinks that it needs to be decoupled from production code. i'm with jim on this one. CMV
I definitely agree about Design by Contract. It's definitely the way the go. Assertions/contracts ARE part of the design so I like to see them right with the code. With that said, you can emulate DBC features in an OO language just fine. You just have to follow certain conventions just like you can write clean elegant OO code in C (not ideal), although you have to build the OO facilities on your own.
Great talk I want to address one point, which is that you code faster following TDD, because you get instant feedback as you write. If you write tests at a too high level and you have many complicated relationships, it can be a nightmare to track down what is going wrong and where
I admire Jim but I think he is wrong on this one. There is no reason to consider TDD as an obstacle for a good design, by the contrary, many times TDD raises hidden design problems to the surface.
the only thing that jim wants in order to use TDD is more domain knowledge of the system you want to do, if you start writing a bunch of unit tests without this knowledge you are going to tied yourself into an arquitecture that may be right or may be wrong and if it's wrong the way out of it is going to be painful to do
To me it sounds like claiming practicing TDD completely ignores the domain knowledge. Of course it doesn't. Also, you do have to have an initial idea of the system architecture. Otherwise you wouldn't even know what test to start with.
"You are not a professional unless you are doing TDD"... How ridiculous. Thank you, Jim, for not being bullied into agreeing with this. That's the kind of egotism that hurts the industry. If it doesn't work for you then you must be doing it wrong. There is no magic bullet that works BEST for all use cases. Study the craft and figure out what works best for your project and team. It may be TDD and it may NOT be.
(1/2) Seems James is a bit too flippant about unit testing. The key is you MUST acquire domain knowledge, otherwise your unit tests are going to be a waste of time in a project of any reasonable complexity. If you apply domain knowledge (often that's outside of CS) you will have a good idea of edge cases and exceptions, so running a million parameter variations to test a vast space of 2^64 settings or inputs or whathaveyou you will be able to catch the important bugs, provided almost all of the rest of the untested parameter space is "typical". In most domains that's true.
Basically when you are developing a system with top-down architecture design, doing TDD is not that much useful, at the start, since you need to devise what relevant objects and business units you have to use. And then based on SOLID principal figure out how to divide responsibility for these units. Afterward unit testing would become logical.
+David Gibson "top-down architecture design" doesn't exist. It is just "top-down design"... and certainly it makes no sense to write tests on the top of the mountain if you won't be able to run them until the project is complete... So the question should be "do we use top down design or bottom-up design"... and the answer is both, a strategy which is also called "meet in the middle", like the tunnel between France and England. The idea is that in the bottom-up code you can write all the tests you want. In the top-down design you map the functional requirements into design decisions which are then translated into top level code and you decompose it further into smaller functions. If the design changes, it is the top level code that changes, but the lower level code remains. If you have coded most of the project, maybe you will have to throw away 50% of the top level code... But since the code is a pyramid... the top level code is just 25% of all the code... and even if the requirements change and half the top level code is wrong, that is only 12.5% of the total code... And the new code will look like the mountain where the Grinch lives... TDD is useful for top level code because it must test major functionalities...
I don't think Coplien knows what TDD is. There's nothing keeping design from changing because you wrote your code using TDD. Your APIs always come out better when you use TDD, and architectures evolve. Listen to Uncle Bob.
I don't think that's his main beef with TDD, I thinks it's more about "unit tests not adding any business value while reducing velocity" part which bothers him the most
Maurizio Taverna True, I don't know who is he. I know only one book: Lean Software Architecture for Agile Software Development. And it was a painful read with many fallacies in order to prove his point.
I'll break it down for you. TDD is not a testing methodology. It is an emergence design tool that drives fundamental decisions about architecture based on simple assertions that could very well be false.
When he says "Savings accounts are not objects," it seems to me to indicate that he doesn't understand what an object is. Yes, a savings account has lots of collaborators, but that doesn't mean it isn't an object.
Starting at 12:10, Bob's question was precisely "How long will you spend before you started writing executable code, on a system that will eventually be around 2M LOC?", and Jim's answer was that, considering it's a telecom system. he'd start by making the constructors and destructors to mark relationships between the objects, and writing a test to make sure the memory was clean at both ends. That would take him half an hour. Not 2M LOC. Hope that helps!
Great to c 2 minds debating on a controversial topic. Although, I do not fit into their league , I must say uncle bob's theory perfectly suits for backend models and processes - may be a B2B. and Jim's B2C..lean validation..no point in building a great architecture when customer doesn't want to buy it.
7 років тому+6
The other guy than Uncle Bob is just repeating random stuff that come to his mind. His points are not much TDD related from my perspective...
By doing the stuff upfront as described by Jim here you are calling a telephone system a telephone system upfront surely? And the developer/architect is not the actually telephone engineer that understands the system fully upfront. Saying that it’s not that I disagree with Jim, what I’m saying is capture that stuff upfront in lightweight visual docs that can be modified if needed - just don’t write the code upfront. The architecture upfront is definitely required but implementing it down to the interface before any “coding” is a slippery slope. Ideally the architecture will describe the connections - like a city planner but not the actual detail of the building. The only way Jim’s argument makes sense is if the architect is the actual telephone systems engineer and 99% of the time this is never the case.
Coplien seems to never have heard of DDD. Architecture evolves. In order for it to evolve you need to keep learning the customer's language. Obviously, as your understanding of the domain evolves, your representation of it in code will evolve too. But that's a gradual process. And that gradual process becomes a lot less painful and a lot faster if it's supported by a massive unit tests library. Good luck evolving the system at a similar pace when you rely on a similarly comprehensive library of higher level tests. Coplien's hit and miss vs complete coverage argument is a straw man. Contracts are a very interesting concept, but in essence unit tests are a more flexible and real, usable tool to do the same things. Your classes themselves not being traceable to business domain objects denotes bad architecture. As long as your code follows the business domain, as DDD says it should, your unit tests will definitely be traceable to business requirements.
Still not sure how it's: tests=contract. I get that they're examples, but then if that's the definition, shall we just say any code that interacts with it is a contract? I feel that's a bit too loose.
If you don't need something why would you add it in the first place? I have the feeling people start TDD because they don't know what they want to accomplish and then they start to build up the software using tests and in the end they get a lot of things the don't need. From my experience I can say that a lot of people are forgetting the domain, which should be the base for TDD.
if you don't know what you want, and because you think TDD will waste your time to create a lot of things you don't need so you directly write the production code. Then you probably will get a lot of PRODUCTION CODE that you don't need neither. :p The problem is: we need to figure out "what we really want". And I think TDD is a great tool for helping you find that answer.
I always ask this questions about TDD ; How can we make sure that the unit test its self is correct , how do we detect a bug in a unit test Assume some one wrote a test that always passes , how often are we going to be tempted to revisit this test because we recognized that it's always passing ? In my experience we usually revisit a test , only when they fail . This is why I personally see TDD as a little bit of wast of time & resource ..
100% unit coverage seems artificial to me, I'd rather have good coverage on all components, about 60% on plumbing stuff and finally a lot of black box (end to end) tests. I don't need an exact unit test to tell what is wrong, the e2e tests will show that and quick debugging would lead to the issue. 100% would just make you constantly rewritting tests for any small changes without much benefit.
Jim is a C++ programmer, but Bob is mostly a Smalltalk programmer? In dynamic languages its easy to turn REPL sessions into tests, and its easy to mock out code, and tests are simple enough that they can check more errors than they introduce. In C++, it may be more likely that your tests themselves can have errors than the code itself you are testing. Also 100 million lines of code sounds ridiculous, you shouldn't need to build such big applications to make any money : you can make GUI operating systems in Smalltalk in only 1000 - 5000 lines of code.
Bob is proficient in many languages, both static and dynamic. A lot of his Clean Code series is done in Java, and he talks about past experience in C++ as well. I have never heard him give an example in Smalltalk, so where are you getting that he's mostly a Smalltalk programmer?
James is being far too polite calling out this guy's absurd claims. Bob says shipping a line of code that has not been UNIT TESTED is irresponsible. He then immediately suggests TDD can prevent code being shipped that wasn't TESTED. It's not a coincidence that he drops "UNIT" in the second half of his sentence. That is a tactic employed with great effect by religious fundamentalists: child-like conflation of topics. Yes automated testing is often essential to success; no, you don't need to isolate every class. As James says, bugs occur between objects far more often than within them.
TDD always seems to me that its just developers wanting to write code right away. Instead of thinking through the problem first. And you dont need to test every line of code with a unit test. What about GUIs ? Even just running the program is going to "test" most of the code by executing it at least once. Thats why most unit testing is waste, you test things that cant possibly fail if they have run even at least once. No point in ever running that test again! And you could probably easily say yes 20% of the code in a project get executed through its entire path, just by running the software at least once
Have you ever walked into a project, knowing nothing about it, but being responsible for maintaining it? It is so much easier when the code has meaningful unit tests than when it has no unit tests. Unit testing gives a glimpse into the mindset of how the code was developed, and what things the developer expected to go right and what could go wrong. People who write code, throw it over a wall, and move on without writing unit tests don't appreciate the next person who has to maintain the code. TDD is about maintaining quality code over time, and this is where Uncle Bob and Jim Coplien disagree (in this video) on their definitions of professionalism. Professionalism is not about whether you make money doing a job, it is about the quality of work that is performed and how to make sure the client is actually saving time and money by using your services rather than just getting it done quick and dirty. Most inexperienced clients, who are new to having systems built for them, only care about a delivery date and initial project cost. They have a static mindset, because they are probably looking to sell the project and move on to the next big buck. I've been in startup-land for quite a while, and I see this is trend. The professional developer thinks about extensibility, performance, maintainability, and scalability, and how to avoid complete rewrites of systems. Writing unit tests is one of the ways to accomplish that.
Any developer that treats developer techniques in a religious way is not only wrong but dangerous. The answer to almost all programming questions from intelligent developers should always be “it depends”. Saying that writing unit tests in every situation is obviously wrong, without even looking at the specifics. Uncle Bob has obviously drank too much of his own kool aid.
It's 💯💯 clear Jim doesn't understand the TDD Workflow / Approach. Most likely he's only read about it, heard people talk about it. Because if he actually tried it long enough and truly tried TDD, his arguments would not even be brought up in this conversation because he'd clearly see that those arguments are BS that he's brining up against TDD. None of the arguments he's brought forth in this talk has anything to do with TDD causing issues with what he is claiming to surface. T TDD does not cause bad design. Nobody says you can't think a bit of design up front before you TDD using things like CRTs, etc. That's a bogus claim by Jim. You do think a bit on the whiteboard, even before you TDD at times. And TDD does not cause bad design. YOU cause bad design when you apply whatever other practices during your TDD workflow (DDD, Design Patterns, Clean Code, Database Design...your BRAIN). TDD does not tell you how to design. It guides you only. It gives you great feedback on your design, on if it works, and much much more. Might be a smart guy. But totally clueless about TDD.
Bob Martin is totally right on "Real professional is someone who practice TDD". I would never let go a programmer in my project/company who does not practice TDD. At least he should write unit tests even after production code. It is such a valuable thing that "forces" you to write beautiful code.
TDD and the love of it does not "force" you to write beautiful code. Like Jim states in the video. I have seen so much garbage code and tests that are written due to "TDD".
From my experience TDD lead to garbage code when a) you focus too much on implementation detail in the test and b) you forget to refactor when you have to. I could just say "a fool with a tool is still a fool", but in the end most people have trouble with TDD because it has a steep learning curve. TDD beginners usually/hopefully think about a proper requirement given by the domain. But instead of writing a test which tests the correct implementation of the requirement they already are thinking how they would design the code, the impl. details, and they build the test around that. People have to seem trouble to keep them separate. In the end the tests and the code becomes messy. Ofc you have to couple the test to the production code, but this should not be the core of the test. I'm neither agree nor disagree to TDD, but a lot of times. you can explain such things by saying that people were doing it wrong for whatever reason. It's sad, but true.
I'm a big TDD fan. I've written a lot of messes while practicing TDD. Conversely, there are projects where I've undergone large restructuring which would have been impossible without the reliable feedback from my tests.
"You can't hide a bad architecture with a great interface" - These words will haunt me today.
snappycatchy great!
Yes you can.
people in my old company were experts in that
im a fan of uncle bob, i dont care
This video is fantastic, watching again in 2020 and it never gets old
I think there is a little bit of a disconnect in this discussion. Jim is talking about software design, and Bob is talking about development practices. TDD is test driven development, not test driven design. You can, and should have an architecture layed out to accomplish the goals of the program. Then you write the code with TDD to fulfill the design..
Exactly.
This is the One Correct Answer if there was any !
TDD is bs and fake science, the very fact that people still can't agree what Jesus meant under the last "D" explains it all
you might often meet tdd practitioners designing their architecture around testing :) its quite a sad state of affairs sometimes because of that
"Writing unit tests in advance / TDD allows you to create an inherently testable architecture, because it is already tested by the time it is done" is a common claim made for TDD. Therefore, while it is called "test driven development", it is no wonder people equate it with "test driven design".
Very interesting format. Would like to see more 'debates' like this.
Are we gonna brush past the fact that Jim remembered the page number of the book he quoted from?
That did not escape me.
You SHOULD brush past that, totally, yes, since it is completely irrelevant to the topic under discussion.
yeah what's going on here
That was just a poser move
man's coupled that info for dramatic effect in representation.
I think he means "being professional", not "doing a profession". Being professional is how you do your job.
Bob is speaking clean as he is coding clean. His speech is so clear and easy to comprehend, that dumb can understand him.
Jim is too fast for my taste and is not controlling his thoughts to be simple and clear as Bob is doing.
Bob is a great theoretician. His theories are sadly, very truthful. We developers don't have a problem with the facts from theory but with the appliances of practices to fulfill that theory.
Jim also talks about how TTD reduces velocity in half - this is really a poor argument because with the TDD you actually much better preserve that velocity as the project grows in complexity. It is true that TDD will initially reduce the productivity of the team and for a relatively small projects this might be an issue. For a larger and more complex systems, however, TDD will actually improve the productivity in the long run by helping with the velocity where it matters the most: during maintenance, extension and refactoring.
Uncle Bob fan boy
but you often have to rewrite tests in TDD when you refactor so any crap you right (even tests) you have to drag around
It's a common fallacy that writing more code makes you slower, TDD actually makes you code quicker because you get instant feedback on what you are writing. If your dependencies are loosely coupled, the architecture changing won't impact the tests that much. They are just a unit that you can move around and reuse where ever you want. If you gutted the whole application, then yes they will fail, but they are meant to
@@bmurph24 With TDD, you test behavior, not implementation. Refactoring, by definition, does not change the behavior and therefore should not break any tests. This is the thing that many people get wrong. They test specific implementation which leads to the refactoring friction you described. Tightly coupling unit tests to the specific implementation is also responsible for the velocity degradation Jim mentioned.
@@milosmrdovic7233 Unfortunately it's pretty hard to ensure a test is just testing behavior. I think because the requirements and your own domain understanding changes over time. The interfaces I've created to define that "behavior boundary" (or whatever I've used to define that boundary) turns out to be wrong. That triggers a refactoring which fundamentally changes the boundary and then I end up having to re-write tests. TDD says "this is normal and good" and I suppose it is but it's a lot of work and one of the benefits I was hoping to have with TDD is confidence during refactors, which is somewhat hampered by the fact that I had to change the tests.
I will say this is mostly a "unit" level problem. "Integration" tests have this problem less. There seems to be a lot of disagreement about the distinction, but I'm going to to say integration tests are defined by a library boundary that you must adhere to for fear of breaking clients who are defined as people who are not my team. Most teams seem to use "does it do I/O" or "how fast does it execute" to define the distinction, which I think leads to implementation specific tests. When there's overlap between the two it's easy to look at the unit side and think the general behavior is covered by the integration test, so lets dive into the specifics... which is the implementation.
And then the code coverage tool also says your integration tests missed these branches so it further encourages you to write those implementation specific, mockist tests.
Or if you write tests to cover bugs as you run into them, those tests in theory should verify a behavior but in practice they dive into implementation and will break on refactor.
And it depends on what kind of projects you're working on. UX heavy and I/O heavy projects are very hard to test effectively but libraries that are essentially pure functions are very easy to write tests for. It's actually fun in the latter case. In the former, you fight with selenium, then find out your mocks are out of date or the dockerized version of your dependency isn't acting like its production counter part and so on. Then I try to break my UX and IO heavy project into just functions, but that again is a trap where I can accidentally start testing the implementation. For (trivial) example, writing a test for all functions that could show/hide the widget instead of rendering the widget and verifying that it shows/hides.
Overall, in my experience my tests do reduce bugs, but they also add to the work load on most projects. That never really changes over time. At best its a wash. I think tradeoffs needs to be more prominently mentioned as part of TDD instead of the belief that it will always be a net gain in productivity and that if it isn't its a failure to implement TDD properly.
'If you don't program the way I do, you're not a professional.'
That kind of crap needs to stop.
You can have successfull projects with and without TDD. But with TDD you have testable code.
I think a lot of people have trouble using TDD because no-one said that it is actually hard to master. Learning the steps of TDD is easy, but mastering it is hard and that's why a lot of TDD projects fail imho. It would interesting to know how much time Uncle Bob investing in mastering TDD. Maybe it was just natural for him but the sad truth is, that for the majority of programmers it isn't.
The trick seems to be that "testing every line of code" gives you TESTED code, but it is going to be rigid, brittle, it will break. People have since found that instead of testing every single method directly (as mentioned "at procedure level"), you can instead only test high-level module public APIs (and coverage will come from testing said modules): ua-cam.com/video/EZ05e7EMOLM/v-deo.html&feature=emb_logo
@Oleksandr Kovalenko No it won't. If the tests verify the behavior, then as long as the internals reproduce the correct behavior, the tests will still pass
@Oleksandr Kovalenko There are debugging tools for that..
if TDD is hard to master and can make your project fail if not done properly, I would want to stay as far as possible from TDD. Sounds like a very risky investment.
In a blogpost he mentioned a story about a workplace with people coding on treadmills and super energetic coworkers showing him tdd. So he was taught in privileged environments
they seem to spend most of their time disagreeing about what they disagree about!
i've got more things out of this kind of discussion rather than political one where people just strawman each other. yes, it is long. but you don't cut the intricacies of such discussion.
@@happyuk06 Bob said almost immediately after that it was not his real answer and he was just joking. Then he went on to give his real answer.
@@happyuk06 Hahaha... and I came to the exact opposite conclusion.
Great talk. I want more.
Correct me if I am wrong, but in TDD you are not aiming to hit that 2^32 (or whatever) state-space, but you are aiming to hit that 10-100 state-space of your domain (for each unit).
Sure an address can be in almost infinite logical states (with a few string properties only restricted by memory), but really domain-wise it might only matter if its valid or not. That's the state space you want explored in your units that depend on address.
Then you might say: well interacting units with 10 or 100 state-spaces will quickly blow up to intractable to test, and that's true from a top-down view. That's why units tests (bottom-up) are the essential tests.
Jim frequently mentions telecom systems and driving architecture based on well defined domain knowledge. The biggest challenge in software development is in fact poorly defined and ever changing domain model. Since there is no "magic" architecture thar can protect you against an arbitrary set of new unanticipated requirements, without good unit tests, the process of maintaining, changing and refactoring any complex piece of software is an absolute nightmare, extremely expensive and time consuming. This is by far the strongest argument for practicing TTD.
do you actually need tdd to write unit tests ...? ive met some really great programmers who write their tests after they finish an implementation - they all produce really great code which is also well decoupled and properly tested.
@@robdoubletrouble Bob Martin proposes that you "need" TDD to write tests so you don't have holes in your test suite. Most programmers who write tests after the production code don't write enough tests, so the tests cannot be relied upon when refactoring or enhancing large sections of that code.
@@BlitzMaul The holes are a major problem because they undermine the key benefit of having the tests in the first place: the refactoring safety net - the freedom to change anything without worrying about regression. TDD rules effectively eliminate the holes, IF you follow them without exceptions. They force you to think from the perspective of a consumer of your code thereby increasing the quality of your interfaces. They force you to control the dependencies early which eliminates unnecessary refactorings. They also force you to think about edge cases, where most bugs lie. TDD is not a tool, but a practice that minimizes the number of defects in your software and increases the quality of your implementations.
Now, there are many practices in different professions that ensure certain favorable outcomes. For example, there is a practice that every surgeon undergoes in order to minimize the risk of severe infection: washing hands and using gloves. Imagine a cardiac surgeon who does not wash his/her hands before handling sterile equipment nor use surgical gloves while operating on an open heart. Even if he/she is the most knowledgeable surgeon you know, would you still consider him/her a responsible medical professional? Would you call that professional at all?
But this wasn't always the case. Only 150 years ago in Vienna, the cradle of the European culture, one in every 10 children would die at birth. Surprisingly, that was the accepted norm that nobody paid too much attention to. Then, the doctors discovered that just by washing their hands, they could prevent 90% of those deaths! This is how the practice of washing hands and using sterile equipment was born.
The above example is exactly analogous to TDD in software development. It's like washing hands. It's the combination of competence and adherence to established principles and best practices that distinguish professionals from amateurs.
@@milosmrdovic7233 Emm, not professionalism is when you know and understand your tools, their advantages and disadvantages, I am not arguing TDD is bad, research has been made, and at best it shows that it might be good, and at worst, that it has no demerit, but all papers conclude, that it is inconclusive; which in my book is good. It is a fact that there is no such a thing as a silver bullet to every problem, if you fail to see that, an/or forget that, every tool, is that, a tool, and a tools have purposes, Maslow's hammer is an excellent rule to keep in mind when thinking about this sorts of things. Do not close your mind, and become over reliant on tools and methodologies, there is a reason why every single "perfect" one has changed over the years, a professional, is not defined by TDD, it is defined by its ability to understand the whys, the hows and the tools to solve a problem and pick the best for each specific case.
@@luxiax1836 I have to disagree on that. A tool is an instrument used to carry out a particular function. A programming language, ORM library, Web Framework, or NoSQL database can be considered tools. A practice, on the other hand, is the application of an idea, a method of doing something. TDD, by definition, is not a tool, but a practice or a method. TDD is the best practice for minimizing defects in software.
It isn't perfect, though. The main problem is that it requires people to have skill and know what they are doing. Many people are not willing to take the necessary time and effort to master it. So what are the alternatives? Design by contract is awesome, but as Jim pointed out, the "technology is not there yet". No one in their right mind would rely on automated code analysis to find bugs in their code. What else do we have?
Are there more debates like this? This was fantastic to watch.
@Andre SCOS Only the gods can debate on this level
its immensely enlightening when 2 greats converse.
9 year later, we still need the hero with referals
My takeaway from this discussion is that you first have the high level system design/architecture in place and then use TTD to drive the low level design(domain model) with the system architecture.
That rare 4:3 format 😙👌
This debate is really good, and really high level. The best part in my opinion is the one where the come down to real discrepancy over whether TDD is better than Design By Contract, where Coplien argues from a "lean" point of view for DBC and and Martin for TTD from a "professionalism" point of view. I think Design By Contract seems to better but today it unfortunately is not implemented in many languages. As far as I know it only exists in Java, but maybe im wrong?
what about now?
Only if you know the domain well, the architecture of the system you are trying to model can be successful. Otherwise, its just try and miss (until you learn about the domain :) and everything else, TDD or not, will be a failure. So, I do agree with Jim!
The problem is you'll never obtain complete domain knowledge because of the ever changing nature of business rules. Without unit tests you simply cannot cope with the inevitable software change.
That was fascinating, thanks for the upload!
I get freaked out by such high level professionals discussing about such a controversial topic.
Why?
Hmm, I guess 2 years ago when I wrote that, my knowledge as a programmer was such that this video wowed me and the topic was a bit overwhelming to me. Not that it doesn't impress me anymore anymore. :)
ERROR: redundant usage of anymore detected on Line 2:
the topic was a bit overwhelming to me. Not that it doesn't impress me anymore anymore.
Both are so well spoken
Wait up. Design by contract is great, but it runs at program execution time, so it doesn't stop things from exploding in the user's face. Therefore it is not a replacement of automated tests, which are executed just after compilation. I mean, if you are using contracts, you still need to write tests to catch errors before runtime!
I think Jim Coplien pointed that out when he said that the tooling around contracts isn't as advanced in most languages as it is in Eiffel, where the contracts are checked statically (ie., compile time). So it's not really something can be done only at run time.
Coplien's point is mostly theoretical and probably better, but Martin's is practical and actually useful today in all languages.
I experimented with ESC/Java, which enforces contracts statically via a theorem prover. I felt the potential, but it had too many issues to be practical. I thought at the time that it didn't get you there 100% as there are some contracts that can't be reliably confirmed. I thought they should require a unit test to confirm those unprovable contracts. So, you'd use contracts for most code, and just a tiny few unit tests to confirm the unproveable to the theorem prover.
Great debate and kept moving but offered some light. I’ve seen issue with my own tdd learning to how to plan db and architecture when a team is working on a project. Unless you also have an architect running in front laying breadcrumbs and scoping the work. Can be just a sprint or two in front though.
Why don't they talk about dependent types? You can implement dependent types in Scheme in
Besides this being a very interesting insight into TDD, Agile and Software-Development in general, how about that magic moment starting at 14:18, when they both start to raise (and flex?) their arms above their heads ...
You mean the two male gorillas posing to gain an advantage over the other? That's the weakest I have seen either. It tells you that they are both not so sure about the bullshit they are talking.
@@lepidoptera9337 Yes, exactly like that. :-)
In my (lame) interpretation it looked like they tried really hard to impress/dominate the other... and that (unconscious) show of prehistoric behavior made me chuckle.
... or maybe that is just over-interpretation ;-)
@@weasel7581 No, I think my inner gorilla can still tell when somebody else's inner gorilla takes over his non-verbal communication. I was just waiting for the chest beating and feet stomping. We may even have seen some two-legged upright walking, if the two gentlemen hadn't calmed down a bit. :-)
As this has been years, I really wanted a follow up debate from these two. I flagged them down via Twitter and they both basically told me the debate is over. I've no idea what that means.
So did you ever figure out what they mean by the debate is over?
@@hrmIwonder I think we all lost.
@BryonLape I'm reading a book examining the research on the effectiveness of various software development methods and the chapter on tdd said the evidence shows modest benefit for tdd.. but you often hear that there's more test code than actual code and that doesn't make a whole lot of sense to me. So I think there's a bit of art involved in getting tests at the right level of granularity.
My experience as a team lead working for 10+ years with 100+ software developers/engineers. TDD has not produce better code or had lead to less issues at all. It’s even quite the opposite. People, I met over the time who had a strong position on TDD wrote most times worse code and produced more issues 🤷♀️But at least their test worked 😒
So, I am totally with Jim. TDD helps not to be more professional, it’s a pseudo professionalism implied by TDD. And Uncle Bob is in my eyes a salesman, who sales ideas (mostly from others) for problems which are unsolvable. Like the Fitness industry selling you how to get in shape with the next best fitness trends. Uncle Bob has no real field / professional experience at all. As he said in one of his conference talks and I paraphrase, „I am writing here a little test … then there little code … her again some tests …“
If you want to become a professional in this sector is like everywhere, hard work, discipline, and lifetime learning. It’s not practicing TDD.
@@NorthWeaponCOH I am talking about TDD 🤷♂️ Not tested code 🤔
It just goes on and on and on and on and on for years and years and years and years...
Got to be honest, I prefer Bob's more personable interface. Jim is quite tactical and it feels like he wanted to be right, more than explore the space. I'm ecstatic this lasted less than 30 minutes as frankly such arguments usually waste much more time and accomplish very little.
If you have a language where asserts are present and are compiled and not stripped, the problem is runtime performance.
If you have test cases where you are throwing random arguments against the test-suite, then you are not doing TDD.
Property based testing isn't random arguments, and should inform concrete test cases, each time a break is found, you add another test case. Perhaps you know of some test cases that worked in the past and you write those when all else is failing.
Bob's is more practical, and Jim's argument about recovery seems a strawman. Recovery might start as a class facade, but eventually can only either be parked or extracted to a separate system.
Jim Coplien totally nailed it! Both guys really rock! I agree that design-by-contract is really useful paradigm. I started using the D programming language precisely because it supports contracts and class-invariants...
2 years later, how is it going? Have you stayed with D all the way without looking back, or are you doing something different now?
@@nandoflorestan Yep. D is my preference when I want to "go native". :)
20:20 Bob: who is it who first used DD with some letter in front, we have CDD, BDD, TDD?
Jim: dd? Well, dd was a Unix command for disk dump.
:-) made my day!
2 million lines of code being pretty small is a very smug remark. Working on projects with hundreds of millions lines of code sounds like gloating. If you have that much code you probably don't have that good of an architecture. I'm surprised Uncle Bob didn't call him on it. Uncle Bob looks much better in this debate, he's calm and humble.
He also said C++... if you want to create programs that work don't write in C++ especially if they are 100 million lines of code? That doesn't sound possible...
it's c++ not python...
In a way, it appears both men are talking around the real problem - Inexperienced developers.
As Bob notes elsewhere, a good 50% of the developers out there have less than 5 years of experience.
I have taught programming in post-secondary for 20 years, and my general thoughts are that we're the ones who have failed in improving how and what we teach students regarding programming. I could say more about our role in creating and perpetuating the problem, but that would be a looong post.
maybe because developer's salary is to low
I see many "Inexperienced developers" with many years of "experience" write shit for 20 years and you will be very good at writing shit
Not to mention relying on Google search and stackoverflow, then copy/pasting without understanding.
One more thing about DBC that's powerful ... its contracts are inheritable! There's no such concept in TDD! You could do both since the two don't conflict, but why? You can do something that's leaner and more effective ... DBC. As the video mentioned, with TDD, you expand the code base dramatically, and introduce more bugs! Also, in TDD, where's the information that shows at a glance the limitations and expectations (contract) that one should have of a method or class? It doesn't exist.
8 years later and bad code has grown by more than 10 fold.
Loved this debate. Didn't seem like they actually disagreed on much in the end other than minor preferences. Jim prefers contracts and Bob prefers unit tests, that seems about it, but both have great merits.
Design by contract sounds interesting with its broad coverage of "tests" on the entire range of arguments, something that tests can't do. But I still wouldn't want to ship my code without at least end to end tests and integration tests, to actually see the code is definitively working as expected, at least in particular tests. I think unit tests are important too, as long as they're done right so they're not brittle or not actually testing anything.
I don't think they prefer it. I think both of them wants to choose the tool that works better.
Eye pleaser to watch this debate! So much info in 20 mins...
And not much that you want to take home because most of this doesn't work in practice. I mean, sure, if you have to work on a domain specific software with a team that doesn't actually understand the domain, already, then, yeah... but that will blow up in your face, anyway. This level of advice is like asking a person who has never seen a grenade whether they want to pull the grenade pin with their left or right hand.
To me Coplien won this debate. Its a pity what Agile has done with the industry nowadays. Such deep thinking debate is unthinkable. Just STFU and ship me some code... whatever it is... thats the level of todays IT industry.
Just an observation. I get the points both of the guys are making. Full disclosure, there are very few disagreements between with what Uncle Bob proposes and what I would encourage. My main comment is that I believe the adversarial, who is right and who is wrong is not where we want to be. If tou disagree with me, that is a positive, we need have the seed of a conversation.
14:18 Uncle Bob: [Professionals] "practice TDD."
Jim Coplien: "A professional to me is a person who makes money for doing a job in that area."
Poincare once said "Sociologists discuss sociological methods; physicists discuss physics."
Now guess who's the sociologist and who's the physicist.
Jim brings up "Professionals" to counter Bob's argument about "Professionalism" (equivocation fallacy). You fail in the same way by misquoting Bob with your bracket summary [Professionals]
@@colerees3965 Despite using the wrong term, this is like trying to win an argument by being pedantic about terminology without actually acknowledging or addressing the argument. This is childish unless you know the person and are just joking, that is, not taking the discussion too seriously.
I'd rather have messy tests than a messy legacy code. You could improve the project by writing production code and fail the unit tests then improve the tests.. The tests will get eventually better. But if you only have a messy code in front of you without tests, you really have nowhere to start. Every time you touch the messy code, it will become even messier.
How true. I find myself on a contract like this at the moment. Any unit testing would have been welcome.
My approach with poor legacy systens is to write contract and system tests then work down to unit tests, often rewriting the production code. The contract tests then tell me if something broke.
15:44 the definition of unit test that Jim had is kind of flawed. A unit test is just a test of a component in isolation. Which is useful to ensure that certain scenarios can be quickly confirmed by a developer and the build system if it is automated. The API testing he is doing is more of an integration test because you're testing against a black box. In that factor I also want to have them done because they can bring up issues sooner than later.
17:55 Eifel does those checks at runtime there's another language that does static compile time testing called EVES which I used in university.
19:33 the problem with the coupling of tests and assertions in code is the cost of running those assertions at runtime. Though it may be moot if you can afford those high powered AWS instances. Not only that, there is going to be a cost to handle those assertion failures.
The argument for black-box testing over white-box testing is becoming more and more prevalent. ua-cam.com/video/EZ05e7EMOLM/v-deo.html&feature=emb_logo
@@Zhuinden my view is you need to do both. The white box tests allow for rapid tests during CI builds.. You don't want to wait 2 hours to find out that your test fails.
Secondly with whitebox you can mock lower level constructs which you cannot really do on integration tests as easily.
14:18 pay attention to the body language. Jim exposes his chest assuming he's comfortable, then Bob raises his arms trying to show himself bigger, and Jim takes a normal position, followed by Bob. Very interesting how they feel they're debating a critical topic.
Later, see their legs, Jim trying to show himself as the alpha male and Bob keeping an interviewer position. This is a fight between attitude and knowledge.
I personally think Jim took the TDD idea wrong. One of the key pieces of achieving a good design writing the test first is, precisely, to define the contracts.
that sneaky remark on SOA was awesome 😆
Great video.
A really good eye-opener for me about TDD is this video : ua-cam.com/video/EZ05e7EMOLM/v-deo.html
Very well explained what Kent Beck vision actually is :)
Most of the people don't even know what a unit is and what TDD is all about.
TDD is about testing behaviours, not a method, not a class.
I think Jim and Bob are disagreeing mainly because of that different point of view : what is a unit under test?
14:31 is he trying to scare the other guy?
I laughed hard when Jim also raised his hands lol
Looked like some IT version of animals fighting for females
Jim was talking about the big macro, Bob was talking about the small micro, they're not actually against each other, but mostly debating different areas. But like Jim, I find it hard to accept TDD as a religious dogma, you write 2 million lines of production code then you also have to write about the same LoC for tests. If you work for a small startup that has to deliver code in a fast project (3-6 months), Bob's TDD is like committing suicide, the primary requirement for non-TDD development to work in a setting like that is that you have to hire experienced programmers that can communicate in the same frequency. There will be less bugs in TDD of course, but as Jim said, you also cut the velocity in half.
Great architecture at front is like a monarchy (they do evolve but incrementally), disciplined TDD is like village-level democracy all the way up to the central system. They don't have to compete against each other. If there's anything in the debate here, it's the focus of attention. When it comes to implementation they're both agree, write the skin, not the meat, let the meat evolves, you write interface but only with meaningful abstract methods that inherently describes whatever that interface is, be minimum, no speculation in your architecture, object, or interfaces, just the thing you really know.
As we also have to admit that Uncle Bob also made some very good points.
Excellent discussion.
How on earth can Jim actually remember the page where guy X says Y. wtf 5:55
Irrelevant to the discussion and distracting - not good in any way
I've seen that it's Mostly the younger guys who resent TDD and the older once support it.
Experience.
skill issue
personally use both contracts and TDD, the problem with contracts is the contract not tested until its run, I also like fail early which contracts on construction give you
my main reason for using TDD is a lot of your code can then be forgotten about (until the next refactor and you will refactor) and the more complex stuff concentrated on
Looks like Bob's stance is to hold as close to pure TDD as possible, because he understands there is going to be some leeway given to experienced Architects/Teams to understand that the Architecture is fluid. Jim is just more comfortable to express that you have to more frequently and initially look at the Domain and build some Domain objects. Not code, just objects, properties, ctors, contracts, and their relationships. I still side with Bob, mostly because I know that wherever you set the bar, the flag will wave 30% either direction. So set the bar high, and let the people "under" the bar still actually be within reason. But both of them are correct in multiple respects.
Thank you for this, very enjoyable
I, very much, would like to know Jim Coplien's opinion on SOLID principles.
They were saying that the test mass is about the same as the production code mass ? What ?
My experience is that the test mass is bigger, if not it means the code hasn't been tested enough and misses conditions.
About lean, I've found that if the test code also isn't lean then at later iterations of refactoring you run the danger of having to spend as much time maintaining the unit tests as the code.
I remember having code mass < test mass in a project, but that was mostly because it was extremely critical to get it right. It was in python 2.6 so we needed to test even third party modules (even standard modules had methods that just returned without doing anything, goddang bonkers)
@@kbrnsr I think it also depends on the sophistication of the test harness. Modern, expensive tool-chains write and maintain much of it automatically creating test code from scripts. I'm stuck with Nunit which is pretty basic.
@@nickbarton3191 I thought new stuff like xunit would be backwards compatible with nunit (no experience with .net programming)
Also I found a blogpost from Bob Martin's Clean Coder blog from 2013 (blog.cleancoder.com/uncle-bob/2013/03/06/ThePragmaticsOfTDD.html) which had this line that felt pretty relevant to what I was trying to do:
--- I usually don’t write tests for frameworks, databases, web-servers, or other third-party software that is supposed to work. I mock these things out, and test my code, not theirs.
--- --- Of course I sometimes do test the third-party code if:
--- --- --- I think it’s broken.
@@kbrnsr just haven't the time to change test framework, not sure if it will give any significant advantages. The tests are deeply coupled to the legacy code which is bad but as I'm steadily refactoring I'm also refactoring tests too which are getting cleaner. They take too long to run for unit tests, I'm starting to run subsets of them.
None of this is easy on a legacy project which wasn't well designed. I started with a file of 8000 lines of just global variables, now down to 160 !
I wanted to buy VectorCast which creates mocks automatically and has a UI for test definition. I notice in Bob Martin's article he says he creates mocks by hand.
Jim Coplien nailed it with pragmatic arguments
Coplien is stuttering, contradicting himself, and running the conversation off into all kinds of other areas, all classic signs of someone who doesn't have substance to their argument. The part about "Oh but Eiffel..." is particularly laughable considering he mentions he uses C++ for most things. Does he even use Eiffel?
Really? I didn't hear that. Instead I heard a bad argument about banking accounts, where he presented that only one user of the account was to be serviced, and so no other user of the account was serviced. Anyone who takes the "simplest" thing not account for 2 users has made a mistake in requirement gathering, not in the mechanics of how the code was written.
it seems to me that design by contract in eiffel force every object, every procedure to have dependency on its preconditition,postcondition, and its invariants. jim thinks it is necessary coupling while uncle rob thinks that it needs to be decoupled from production code. i'm with jim on this one. CMV
I definitely agree about Design by Contract. It's definitely the way the go. Assertions/contracts ARE part of the design so I like to see them right with the code. With that said, you can emulate DBC features in an OO language just fine. You just have to follow certain conventions just like you can write clean elegant OO code in C (not ideal), although you have to build the OO facilities on your own.
Nice to see Hans-Hermann Hoppe debating software things
Great talk
I want to address one point, which is that you code faster following TDD, because you get instant feedback as you write. If you write tests at a too high level and you have many complicated relationships, it can be a nightmare to track down what is going wrong and where
I admire Jim but I think he is wrong on this one. There is no reason to consider TDD as an obstacle for a good design, by the contrary, many times TDD raises hidden design problems to the surface.
the only thing that jim wants in order to use TDD is more domain knowledge of the system you want to do, if you start writing a bunch of unit tests without this knowledge you are going to tied yourself into an arquitecture that may be right or may be wrong and if it's wrong the way out of it is going to be painful to do
To me it sounds like claiming practicing TDD completely ignores the domain knowledge. Of course it doesn't. Also, you do have to have an initial idea of the system architecture. Otherwise you wouldn't even know what test to start with.
compilers are not shipped with unit tests. that's the end of entire talk :)
eight years ago… during the Great Pixel Drought. harsh times
"You are not a professional unless you are doing TDD"... How ridiculous. Thank you, Jim, for not being bullied into agreeing with this. That's the kind of egotism that hurts the industry. If it doesn't work for you then you must be doing it wrong. There is no magic bullet that works BEST for all use cases. Study the craft and figure out what works best for your project and team. It may be TDD and it may NOT be.
If you actually watched the video you would realize that he was joking about that definition :) (Check 14.42)
Abhishek SP, you're confused. He did say multiple times that a developer isn't professional if they don't use TDD. For example, see 0:43.
(1/2) Seems James is a bit too flippant about unit testing. The key is you MUST acquire domain knowledge, otherwise your unit tests are going to be a waste of time in a project of any reasonable complexity. If you apply domain knowledge (often that's outside of CS) you will have a good idea of edge cases and exceptions, so running a million parameter variations to test a vast space of 2^64 settings or inputs or whathaveyou you will be able to catch the important bugs, provided almost all of the rest of the untested parameter space is "typical". In most domains that's true.
Watching it in Jan 25, 2022
Basically when you are developing a system with top-down architecture design, doing TDD is not that much useful, at the start, since you need to devise what relevant objects and business units you have to use. And then based on SOLID principal figure out how to divide responsibility for these units. Afterward unit testing would become logical.
+David Gibson "top-down architecture design" doesn't exist. It is just "top-down design"... and certainly it makes no sense to write tests on the top of the mountain if you won't be able to run them until the project is complete...
So the question should be "do we use top down design or bottom-up design"... and the answer is both, a strategy which is also called "meet in the middle", like the tunnel between France and England.
The idea is that in the bottom-up code you can write all the tests you want. In the top-down design you map the functional requirements into design decisions which are then translated into top level code and you decompose it further into smaller functions. If the design changes, it is the top level code that changes, but the lower level code remains. If you have coded most of the project, maybe you will have to throw away 50% of the top level code...
But since the code is a pyramid... the top level code is just 25% of all the code... and even if the requirements change and half the top level code is wrong, that is only 12.5% of the total code...
And the new code will look like the mountain where the Grinch lives...
TDD is useful for top level code because it must test major functionalities...
gold.
Jim Coplien think's in business Uncle Bob think's like developer which is good ---> both diferents points of view.
I don't think Coplien knows what TDD is. There's nothing keeping design from changing because you wrote your code using TDD. Your APIs always come out better when you use TDD, and architectures evolve. Listen to Uncle Bob.
I don't think that's his main beef with TDD, I thinks it's more about "unit tests not adding any business value while reducing velocity" part which bothers him the most
I think you don't know who is Jim Coplien.
Go back to school and listen ...
Maurizio Taverna True, I don't know who is he. I know only one book: Lean Software Architecture for Agile Software Development. And it was a painful read with many fallacies in order to prove his point.
I'll break it down for you. TDD is not a testing methodology. It is an emergence design tool that drives fundamental decisions about architecture based on simple assertions that could very well be false.
When he says "Savings accounts are not objects," it seems to me to indicate that he doesn't understand what an object is. Yes, a savings account has lots of collaborators, but that doesn't mean it isn't an object.
Well, both the ideologies do not fit in different ecosystems. For eg, TDD is easy to do in Java than something like IBM Int Bus.
Put 2 programmers long enough in a room together and they will fuck each other up. PDD - pizza driven development
FIGHT! FIGHT! FIGHT!
It looks like a hero war to me :D
did Jim said, two million lines of code in half and hour?
It was "starting to spec out all the interfaces" behind a system of that size. I'm paraphrasing, but it was similar.
I mean even in that case.. damn.. I feel like a script kiddie. Thanks youtube.
Starting at 12:10, Bob's question was precisely "How long will you spend before you started writing executable code, on a system that will eventually be around 2M LOC?", and Jim's answer was that, considering it's a telecom system. he'd start by making the constructors and destructors to mark relationships between the objects, and writing a test to make sure the memory was clean at both ends. That would take him half an hour. Not 2M LOC. Hope that helps!
Great to c 2 minds debating on a controversial topic. Although, I do not fit into their league , I must say uncle bob's theory perfectly suits for backend models and processes - may be a B2B. and Jim's B2C..lean validation..no point in building a great architecture when customer doesn't want to buy it.
The other guy than Uncle Bob is just repeating random stuff that come to his mind. His points are not much TDD related from my perspective...
By doing the stuff upfront as described by Jim here you are calling a telephone system a telephone system upfront surely? And the developer/architect is not the actually telephone engineer that understands the system fully upfront. Saying that it’s not that I disagree with Jim, what I’m saying is capture that stuff upfront in lightweight visual docs that can be modified if needed - just don’t write the code upfront. The architecture upfront is definitely required but implementing it down to the interface before any “coding” is a slippery slope. Ideally the architecture will describe the connections - like a city planner but not the actual detail of the building. The only way Jim’s argument makes sense is if the architect is the actual telephone systems engineer and 99% of the time this is never the case.
Programming by contract does not eliminate the need for automated unit tests :/
Coplien seems to never have heard of DDD. Architecture evolves. In order for it to evolve you need to keep learning the customer's language. Obviously, as your understanding of the domain evolves, your representation of it in code will evolve too. But that's a gradual process. And that gradual process becomes a lot less painful and a lot faster if it's supported by a massive unit tests library. Good luck evolving the system at a similar pace when you rely on a similarly comprehensive library of higher level tests.
Coplien's hit and miss vs complete coverage argument is a straw man. Contracts are a very interesting concept, but in essence unit tests are a more flexible and real, usable tool to do the same things. Your classes themselves not being traceable to business domain objects denotes bad architecture. As long as your code follows the business domain, as DDD says it should, your unit tests will definitely be traceable to business requirements.
TDD is design by contract. You just 'discover' the contract as you go along. It is the ultimate anti YAGNI.
That's not what a contract means.
Still not sure how it's: tests=contract. I get that they're examples, but then if that's the definition, shall we just say any code that interacts with it is a contract? I feel that's a bit too loose.
If you don't need something why would you add it in the first place?
I have the feeling people start TDD because they don't know what they want to accomplish and then they start to build up the software using tests and in the end they get a lot of things the don't need.
From my experience I can say that a lot of people are forgetting the domain, which should be the base for TDD.
if you don't know what you want, and because you think TDD will waste your time to create a lot of things you don't need so you directly write the production code. Then you probably will get a lot of PRODUCTION CODE that you don't need neither. :p
The problem is: we need to figure out "what we really want". And I think TDD is a great tool for helping you find that answer.
I always ask this questions about TDD ;
How can we make sure that the unit test its self is correct , how do we detect a bug in a unit test
Assume some one wrote a test that always passes , how often are we going to be tempted to revisit this test because we recognized that it's always passing ?
In my experience we usually revisit a test , only when they fail . This is why I personally see TDD as a little bit of wast of time & resource ..
3:41 data is one dimensional
two giants going head to head
C# has had it available for a long while. Code Contracts.
@@INTPnerd Eiffel has it baked in; C# has developed just as a reference.
"2 million lines of code? I consider that small."
I test so much my code barely makes it to production.
This joke saddens me.
100% unit coverage seems artificial to me, I'd rather have good coverage on all components, about 60% on plumbing stuff and finally a lot of black box (end to end) tests. I don't need an exact unit test to tell what is wrong, the e2e tests will show that and quick debugging would lead to the issue. 100% would just make you constantly rewritting tests for any small changes without much benefit.
Almost everyone in the comments is a senior architect, according to themselves.
its normal for people to criticize software development, because its a creative process.
@@azerty8866
It's normal for people to think their better than they are.
Jim is a C++ programmer, but Bob is mostly a Smalltalk programmer? In dynamic languages its easy to turn REPL sessions into tests, and its easy to mock out code, and tests are simple enough that they can check more errors than they introduce. In C++, it may be more likely that your tests themselves can have errors than the code itself you are testing. Also 100 million lines of code sounds ridiculous, you shouldn't need to build such big applications to make any money : you can make GUI operating systems in Smalltalk in only 1000 - 5000 lines of code.
Bob is proficient in many languages, both static and dynamic. A lot of his Clean Code series is done in Java, and he talks about past experience in C++ as well. I have never heard him give an example in Smalltalk, so where are you getting that he's mostly a Smalltalk programmer?
At their level TDD or no TDD they're still writing quality stuff.
He went from
"It's unprofessional to not practice TDD"
to
"It's unprofessional to ship code that have not been unit tested"
Who is this guy? 😂
James is being far too polite calling out this guy's absurd claims. Bob says shipping a line of code that has not been UNIT TESTED is irresponsible. He then immediately suggests TDD can prevent code being shipped that wasn't TESTED. It's not a coincidence that he drops "UNIT" in the second half of his sentence. That is a tactic employed with great effect by religious fundamentalists: child-like conflation of topics. Yes automated testing is often essential to success; no, you don't need to isolate every class. As James says, bugs occur between objects far more often than within them.
bob was always a snake oil salesman
TDD always seems to me that its just developers wanting to write code right away. Instead of thinking through the problem first.
And you dont need to test every line of code with a unit test. What about GUIs ? Even just running the program is going to "test" most of the code by executing it at least once. Thats why most unit testing is waste, you test things that cant possibly fail if they have run even at least once. No point in ever running that test again! And you could probably easily say yes 20% of the code in a project get executed through its entire path, just by running the software at least once
Have you ever walked into a project, knowing nothing about it, but being responsible for maintaining it? It is so much easier when the code has meaningful unit tests than when it has no unit tests. Unit testing gives a glimpse into the mindset of how the code was developed, and what things the developer expected to go right and what could go wrong. People who write code, throw it over a wall, and move on without writing unit tests don't appreciate the next person who has to maintain the code.
TDD is about maintaining quality code over time, and this is where Uncle Bob and Jim Coplien disagree (in this video) on their definitions of professionalism. Professionalism is not about whether you make money doing a job, it is about the quality of work that is performed and how to make sure the client is actually saving time and money by using your services rather than just getting it done quick and dirty.
Most inexperienced clients, who are new to having systems built for them, only care about a delivery date and initial project cost. They have a static mindset, because they are probably looking to sell the project and move on to the next big buck. I've been in startup-land for quite a while, and I see this is trend. The professional developer thinks about extensibility, performance, maintainability, and scalability, and how to avoid complete rewrites of systems. Writing unit tests is one of the ways to accomplish that.
I do agree with Jim, sorry Bob
Any developer that treats developer techniques in a religious way is not only wrong but dangerous. The answer to almost all programming questions from intelligent developers should always be “it depends”. Saying that writing unit tests in every situation is obviously wrong, without even looking at the specifics. Uncle Bob has obviously drank too much of his own kool aid.
Watching these two cross legged gurus while I am crossing my legs as well 😎
It's 💯💯 clear Jim doesn't understand the TDD Workflow / Approach. Most likely he's only read about it, heard people talk about it. Because if he actually tried it long enough and truly tried TDD, his arguments would not even be brought up in this conversation because he'd clearly see that those arguments are BS that he's brining up against TDD.
None of the arguments he's brought forth in this talk has anything to do with TDD causing issues with what he is claiming to surface. T
TDD does not cause bad design. Nobody says you can't think a bit of design up front before you TDD using things like CRTs, etc. That's a bogus claim by Jim. You do think a bit on the whiteboard, even before you TDD at times. And TDD does not cause bad design. YOU cause bad design when you apply whatever other practices during your TDD workflow (DDD, Design Patterns, Clean Code, Database Design...your BRAIN). TDD does not tell you how to design. It guides you only. It gives you great feedback on your design, on if it works, and much much more.
Might be a smart guy. But totally clueless about TDD.
Bob Martin is totally right on "Real professional is someone who practice TDD". I would never let go a programmer in my project/company who does not practice TDD. At least he should write unit tests even after production code. It is such a valuable thing that "forces" you to write beautiful code.
TDD and the love of it does not "force" you to write beautiful code.
Like Jim states in the video. I have seen so much garbage code and tests that are written due to "TDD".
From my experience TDD lead to garbage code when a) you focus too much on implementation detail in the test and b) you forget to refactor when you have to.
I could just say "a fool with a tool is still a fool", but in the end most people have trouble with TDD because it has a steep learning curve. TDD beginners usually/hopefully think about a proper requirement given by the domain. But instead of writing a test which tests the correct implementation of the requirement they already are thinking how they would design the code, the impl. details, and they build the test around that. People have to seem trouble to keep them separate. In the end the tests and the code becomes messy. Ofc you have to couple the test to the production code, but this should not be the core of the test.
I'm neither agree nor disagree to TDD, but a lot of times. you can explain such things by saying that people were doing it wrong for whatever reason. It's sad, but true.
A true professional writes unit test, but not necessarily using tdd
I'm a big TDD fan. I've written a lot of messes while practicing TDD. Conversely, there are projects where I've undergone large restructuring which would have been impossible without the reliable feedback from my tests.