What's worse than TDD is an extremely opinionated colleague who judges you by your lack of interest in TDD...yet production is full of their code with no TDD
For me the "problem" with TDD is that it amplifies the advantages and disadvantages of just testing. And there are MANY bad practices in tests. The most common one I see is testing implementation rather than behavior (that alone is the source of 90% of the pain of testing and refactoring). If you have bad testing practices, TDD will make everything worse, but if you have good testing practices, TDD will make things better.
@@kelvinwalter8623 Not testing the implementation. Trying to keep tests declarative. Covering the boundaries of the inputs (test min/max and one step beyond). Checking that failure states are predictable. Trying to write tests that somehow convey intent, because code often can't.
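A minimal sketch of what boundary-focused tests like those described above might look like. The clampPercentage function and the Vitest-style test API are assumptions for illustration, not anything from the video:

```typescript
import { describe, it, expect } from "vitest";

// Hypothetical function under test: clamps a value to the 0-100 range,
// with a predictable failure state for NaN.
function clampPercentage(value: number): number {
  if (Number.isNaN(value)) throw new RangeError("value must be a number");
  if (value < 0) return 0;
  if (value > 100) return 100;
  return value;
}

describe("clampPercentage", () => {
  // Boundaries: min, max, and one step beyond each.
  it("accepts the minimum boundary", () => expect(clampPercentage(0)).toBe(0));
  it("accepts the maximum boundary", () => expect(clampPercentage(100)).toBe(100));
  it("clamps one below the minimum", () => expect(clampPercentage(-1)).toBe(0));
  it("clamps one above the maximum", () => expect(clampPercentage(101)).toBe(100));

  // Failure state is predictable: a specific error type, not silent garbage.
  it("rejects NaN predictably", () => expect(() => clampPercentage(NaN)).toThrow(RangeError));
});
```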
@@martinbecker1069 And BDD got co-opted by the cucumber/gherkin people even though it doesn't need to be married to a silly requirements specification.
I'm in the TDD camp, and I rarely write unit tests. I feel like unit tests are a bit of a straw man here; they are a terrible candidate for TDD for the exact reasons you gave: when tests are that small, refactoring becomes difficult. You are coupling tests to functions/files/classes, and if you ever need to move logic out of those places then you also have to change the test file, which slows you down. TDD is better when you're writing integration tests: all your tests should care about is the inputs and the outputs, so you're free to design and change the code however you want during refactoring, as long as it gives the correct output for the same input.
@@JChen7 I like mocking, but can understand why some people dislike it. It's a tool that's easy to abuse. If I want to write tests that test my whole application, but I don't want to test external dependencies (like database connections, S3, etc.), I like using mocks. What's important is to mock as little code as possible, so that you are only mocking the external dependency but are still testing all of your own code, and you do this by pushing dependencies to the boundaries of your application. For example, if you make a database connection, make sure only the logic necessary to do that sits in one function and only mock that function in your test. A good way to practice this is to force yourself not to use mocking libraries, but instead abstract your dependencies behind an interface, and then create a service that implements that interface which you use in your tests instead of the real thing. When constraining yourself this way it is difficult to abuse mocks, and it can teach you good habits in pushing dependencies to your application's boundary, although it's important to be wary that you are adding extra abstractions, which can create a different set of problems later down the line.
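A rough sketch of the "dependency behind an interface plus hand-written test double" approach described above. The FileStore interface, InMemoryFileStore, and archiveReport names are hypothetical, and a Vitest-style test API is assumed:

```typescript
import { it, expect } from "vitest";

// Boundary interface: the only thing the rest of the app knows about storage.
interface FileStore {
  put(key: string, body: string): Promise<void>;
}

// The real implementation would wrap the S3 client; only that class touches
// the external dependency. (e.g. class S3FileStore implements FileStore { ... })

// Hand-written test double used instead of a mocking library.
class InMemoryFileStore implements FileStore {
  public files = new Map<string, string>();
  async put(key: string, body: string): Promise<void> {
    this.files.set(key, body);
  }
}

// Application code under test: all of it runs for real, only the boundary is substituted.
async function archiveReport(store: FileStore, reportId: string, lines: string[]): Promise<string> {
  const key = `reports/${reportId}.txt`;
  await store.put(key, lines.join("\n"));
  return key;
}

it("writes the report under a predictable key", async () => {
  const store = new InMemoryFileStore();
  const key = await archiveReport(store, "42", ["a", "b"]);
  expect(key).toBe("reports/42.txt");
  expect(store.files.get(key)).toBe("a\nb");
});
```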
I tend to write code twice: the first time, I just play around in a scratchpad repo with spaghetti; the second time, I structure the code more sanely and add tests. The second iteration can be TDD because by then I've figured out what I want the code to do.
This is actually still TDD which is a big thing no one understands (since I don't think anyone actually reads anything about it). You are describing a "spike".
I'm pretty sure that he thinks that when you have small unit-tested units, the refactoring will almost always change the interface of multiple units, and that will break the tests for those units.
One thing I really hate (19:00) is when people call surveying or observational studies scientific. They are inherently not. Science needs a hypothesis, and your test of the hypothesis can't have a pre-known outcome. This really grinds my gears given that test-driven development needs the test written first. Now, it's good to collect data and write it down even if you don't have a hypothesis, and it can find truths. But it's not science. Science doesn't equal gathering data.
0-test development is the best XD.... 1. i can ship project faster.. 2. I can get more money from client for fixing more bugs.... 3. but fixes create more bugs, means more money... and of course job security...
@@NathanHedglin Not all bugs are new bugs. Some bugs are recurrent bugs that reoccur when you make changes. Some changes can affect other parts of the code
The refactor part of TDD is also a refactor of your design (not just the implementation of your function). So yes, when doing TDD you throw away tests. Tests are a way for you to learn how to design, not just how to implement your function. This means part of the work is throwing away your code and your tests, just like when you work without TDD and throw a function away because you gained knowledge. Summary: refactor = refactor implementation + refactor design, where "refactor X" = create, update, or delete X.
You know, to do refactoring efficiently, you want to have tests that you do not throw away, such that you are sure that things keep working and the steps you took worked out right. If you are throwing practically all those relevant tests out, then you are not so much refactoring as you are throwing out the old and starting almost from scratch again.
@@sorcdk2880 Someone removed my follow-up comment. Just apply the parallel change refactoring pattern, and once you are done, remove the old implementation and tests.
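A minimal sketch of the parallel change (expand/contract) pattern mentioned above, with hypothetical function names; the new API lives alongside the old one until callers and tests have migrated:

```typescript
// Expand: the new API is added next to the old one; both stay tested during the migration.
export function totalPriceCents(items: { priceCents: number }[]): number {
  return items.reduce((sum, item) => sum + item.priceCents, 0);
}

/** @deprecated old float-based API, kept only while callers and their tests migrate */
export function totalPrice(items: { price: number }[]): number {
  return items.reduce((sum, item) => sum + item.price, 0);
}

// Contract: once every caller uses totalPriceCents, totalPrice and its tests are deleted.
```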
@@sorcdk2880 Well, you'd be doing that without TDD also. If you're throwing away full functionalities and moving away from the original specification, then with or without TDD you're changing everything and essentially starting from scratch anyway. I think the issue here is the misuse of the term refactoring: if you're changing out everything, then you're restructuring, not refactoring. No matter what methodology you use, you're changing out everything anyway. Throwing away useless tests at that point is irrelevant regardless of how you implemented them.
@@ryanbeatbox Not exactly; outside of TDD, the timing, design, and purpose of tests are such that you can often get around this problem.
When working on projects with other developers I'm significantly more concerned when JS/TS programmers import 600 moving target dependencies maintained by thousands of strangers than I am whether they wrote tests for their 20 line function.
I think Ian Cooper's talk about TDD addresses most of the problems with TDD better than Dave's video, and most of the points that Prime raises as well: ua-cam.com/video/EZ05e7EMOLM/v-deo.html&ab_channel=DevTernityConference. I personally do not like TDD that much, but I think it is extremely useful when you want to design an API/a tool that is supposed to provide some kind of a service to third parties/other services in-house, because I find it easier to design the tool itself when I have to make black-box assertions about the tool's interface in the way the tool's users would, and it is really nice to have those assertions in the form of tests.
I also dislike TDD, but this talk by Ian Cooper is the first one that explained the idea clearly and made me think about the usefulness. The main point is not doing it at the unit test level. That is too low, wrong, and not valuable. Prime you should see this talk!!!
@fetherfulbiped Ian Cooper's talk about TDD is one of the best ones, addressing the problem that people think you should do TDD to test-drive class methods. Farley is repeating himself in many of his videos, due to commercial interests, however I think he is an excellent talker. Proven by this video which describes TDD very well. ua-cam.com/video/ln4WnxX-wrw/v-deo.html
I think if you have clearly defined requirements for what you're trying to create, then TDD makes sense. The problem with TDD when that is not true is that you spend time writing tests for something that actually isn't needed.
I think TDD forces you to write code that is easily testable. I've seen a lot of code written without testing in mind, which made it very hard to write good tests for it.
@@streettrialsandstuff That only gets you the ability to use test doubles. To keep your tests simple and relatively short, you have to write them together with the production code; otherwise you neglect your test code OR you are a god.
Prime... I think a unit is meant to be a public function, or similar. A private function is an implementation detail. If you think this way, then you can refactor any way you want, including breaking the code up into private functions or even a class. Doing this means you won't necessarily have to specifically test the newly created class, but of course you can.
Exactly; a public function (for unit / integration tests) or a public feature (for functional tests). The tests you write should help you refactor; not slow you down.
I'm still in school and they don't really teach us about testing in class, so I don't have much knowledge or experience, but from my intuition this is what I assumed would be the case, so I've always been confused whenever Prime starts talking about testing and this sort of topic comes up.
Based on your statement, I think it is fair to say the unit, aka the public function or similar, has one or more supporting private functions/methods? So, if one private function involves complex logic, how do we test that private function independently? Are unit tests then some kind of integration test, since we test private functions through a proxy, the public function?
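A small sketch of the idea in this sub-thread: the private helper is exercised only through the public function, so the helper can be split, merged, or renamed without touching the test. The makeSlug/normalizeWhitespace names and the Vitest API are assumptions for illustration:

```typescript
import { it, expect } from "vitest";

// Private helper: complex-ish logic, not exported, free to be refactored or inlined.
function normalizeWhitespace(input: string): string {
  return input.trim().replace(/\s+/g, " ");
}

// Public function: the "unit" the tests target.
export function makeSlug(title: string): string {
  return normalizeWhitespace(title).toLowerCase().replace(/ /g, "-");
}

// The helper's behavior is verified indirectly, through the public interface,
// so reorganizing the private helpers never breaks this test.
it("collapses runs of whitespace into single hyphens", () => {
  expect(makeSlug("  Hello   World ")).toBe("hello-world");
});
```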
I've been doing TDD for 4 years and I love it. I absolutely love it. In the process of adopting TDD into my workflow I was so frustrated. I dropped it 3 or 4 times before finally adopting it. There are a lot of gotchas, and you are better off getting a mentor to resolve the confusion instead of stumbling onto those by yourself.
You should know that TDD doesn't produce good designs, only mediocre ones. To get to good designs you should redesign after every large increase in code size. Besides, many "small" unit tests created by TDD are just useless and you should delete them.
@@markonovakovic3838 The most overlooked advice about how to properly do TDD is the advice on deleting useless tests, and I believe this is the main reason why TDD is just not that great in practice. Look at what the inventor of TDD, Kent Beck, did in his famous book. He deleted the testFrancMultiplication test because he wrote larger tests that fully covered the functionality of that smaller test. There is no proper TDD without deleting useless tests.
What does it mean if all tests are green? The only correct answer is "nothing"; it means literally nothing at all. It doesn't mean your code is good, or clean, or safe, or scalable/maintainable/whatever -- none of this. It only means it satisfies the current state of the tests, which is itself subject to change.
One thing I would say for everyone who, like me, finds TDD impractical, is that compiler errors are tests. If you're writing with a compiler that flags warnings as errors, those are tests too.
Also, RE: skill issue and/or it takes time to "click" and/or "you can't get it if someone simply forces you": when I started introducing TDD to my non-backend-lover juniors, they absolutely loved it almost instantly (beyond, admittedly, a bit of initial "seems tedious"); the boost in confidence they get from not only testing their code but being relatively certain (since the test failed initially) that their test actually exercises their implementation (...and that they didn't break anything else) is massive!
Honestly I find it hard to even get people to write tests at all, if they have never written them before. There always seems to be a period of "why am I writing the code twice" or "this is just for a code coverage stat" before it really clicks that they should be testing for the behaviours that they want, rather than testing the code does what the code does. Once that clicks, writing tests is great, but I think it does take a bit of time before it clicks, and then TDD is even more difficult because you need to completely change how you write code, and if you've been writing code for a long time, that's hard to do.
Front end TDD is the biggest ball ache though. We're developing an app and are forced to do TDD. I've been doing it for a year now and it's just a massive pain having to write the tests up front. Frequently we get feedback from marketing and users that they don't like something when we initially send it through, and we have to completely change it. Also, UI changes before feature deliveries are fairly common, so all that initial work writing those tests is a massive waste of time. I literally wanna tear my bloody hair out doing TDD for UI that can change frequently. Love testing but seriously hate TDD. I think it's a waste of time to write your tests up front.
@@tanotive6182 Yeah, generally speaking I don't think I'll ever try to test frontend code; that's a job for the QA team, aside from some very occasional (extremely rare) complex logic I might wanna unit test to get right. EDIT: OTOH I completely disagree with you about blaming TDD for this; the blame falls squarely on TDD misuse. I don't really believe in testing the frontend automatically, as I said. (EDIT2: An extremely basic reason for this is that most UI testing tools will happily click a 0.25x0.25 pixel button that would be entirely impossible for a human to click... I don't see the point.)
@@tanotive6182 It sounds to me like you're testing the UI at too low a level, likely testing implementation rather than behavior (which is a common problem). For one, it sounds like your team has a problem with knowing what to actually build. The UI should usually be the first thing understood, since it's where the testing should start, whether it's TDD or not. Second, I'm struggling to see how the change is the core problem even if you're testing UI implementation. If you change from radio buttons to some kind of fancy collapsible selection, the level of abstraction should be the same, so in the test you change "selectRadio(element)" to "selectFancy(element)". Unless you're talking about the difficulty of implementing "selectFancy()" compared to the radio version, I don't see how this is a testing problem, much less a TDD one. It sounds like a misuse problem, like others mentioned.
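A sketch of the abstraction level the comment above argues for: the test talks to a small page-object-style helper, so swapping radio buttons for a fancy selector only changes the helper's implementation, not the test. The ShippingPage interface and FakeShippingPage are hypothetical names, and a Vitest-style API is assumed:

```typescript
import { it, expect } from "vitest";

// Page-object helpers: tests talk to these, never to the concrete widgets.
interface ShippingPage {
  chooseShipping(option: "standard" | "express"): Promise<void>;
  selectedShipping(): Promise<string>;
}

// A trivial in-memory stand-in for the real DOM-driving implementation;
// changing the widget only changes that implementation, not this test.
class FakeShippingPage implements ShippingPage {
  private selected = "standard";
  async chooseShipping(option: "standard" | "express"): Promise<void> {
    this.selected = option;
  }
  async selectedShipping(): Promise<string> {
    return this.selected;
  }
}

it("lets the user pick express shipping", async () => {
  const page: ShippingPage = new FakeShippingPage();
  await page.chooseShipping("express");
  expect(await page.selectedShipping()).toBe("express");
});
```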
This is exactly why I always have two kinds of branches. One is for the exploratory, experimental, or prototypical stage (exp). The other is for production code. These two are separate and have different goals, but ultimately the end game is to deliver correct code.

The exp branch is built in a fail-fast manner to explore the problem space. It also includes exploring how to implement potential solutions and how to test them. Sometimes we don't even know how to build the test harness, so this is the time to explore how one would go about it. The exp branch doesn't have to solve the problem to the end; it just has to establish the framework for solving the problem and a proof of concept for how tests might be written. Once a certain level of confidence has been established and presented to the group, we proceed to the next phase.

The output of exp is then rewritten test-first, with TDD in mind, in the production branches. The tests can be made in bulk at this time, and one after another the implementation follows. What's even funny is that sometimes the QA team working on test suites has already designed so many tests that a separate team writes those tests into code just to keep up with the amount of tests while implementation is ongoing. They are peer reviewed according to production quality standards.

Sometimes the problem space is well known enough, and the means to create a test harness for it well established, that we don't need exp branches and can work directly in TDD. So I would say we do an "xtdd" approach: explore, experiment, or prototype when needed, then do TDD for production code. One can then iterate features on production-quality branches, given the dev understands the problem space and the means to test well enough; otherwise they need to explore first. For me, it's about being practical and actually giving people the chance to understand before requiring rigid standards.
@@suede__ I totally agree with you. Sometimes projects do not have enough budget, time, or human resources, and in those cases people left, right, and center will try to cut corners where they can to deliver something. What the quality looks like for that something, with corners cut all over the place, is another story. Sometimes you get lucky and nothing bad happens, but sometimes you get unlucky: the wrong dosage of medicine gets mixed and kills a patient, financial transactions are mishandled, planes crash, cars accelerate at random, pressure valves in gas pipelines close randomly, or the electrical grid fails. 😅 If you're working on yet another todo list, the consequences may not be as dire.
Tests must be about the interface, how you use the thing, not the inner workings, how the thing works. If you change the interface, you change the tests; if the interface does not change, the tests should not need to change. If they do need to change, it means the tests were bad in the first place.
@@anarchoyeasty3908 There is a case where a bug could mean a missing test, but then you keep the test. But adding a test is not the same as changing a test.
I like to think of tests and requirements as two sides of a coin. Each requirement _has_ to have an associated test (even if that's just an ad hoc demonstration of functionality). For the most important requirements, you create regression tests that can be run before each release. If the requirement changes, you need to change the test. You try your hardest using other techniques to make the tests so they don't care about the implementation -- only the requirement. In that sense I like the idea of TDD because it acknowledges that all you need to write tests is the requirements, and if you write the implementation first your tests are more likely to be contrived nightmares that test things that don't matter. That being said, it is often impractical to fully write tests before you start the implementation, but I think it's always good to keep in mind that they need to tie back to the requirements.
I would love to see ThePrime debate Dave on mockist style unit tests. Mockist style unit tests are the biggest nonsense ever invented and quite often used on many projects.
I think the strongest argument for TDD is that it aligns with how other engineering disciplines will create simulations and tests that ensure correct design before they begin construction. Of course software has more flexibility post-construction than other fields, but it still seems to point to its usefulness in principle
Except it doesn't: you simulate after the initial design and iterate on it. TDD expects you to already know the tests before you code; anyone who says otherwise probably never read Kent Beck's work (the daddy of TDD).
@@marcs9451 I don't believe TDD expects you to know all the tests before you code. From what I understand, you start with one failing test, get it to pass, then refactor. Then add another test, and so on.
other engineering disciplines have one big advantage over software development though (normally at least): they know exactly what they need to design and their requirements don't change stupidly often
@@kuhluhOG That's a good point. And our software often integrates with other software that also is changing a lot, so the interfaces are constantly changing.
I find you necessarily need to write some code up front when there’s areas that you don’t quite understand. I call it discovery. Then I move into more formal design. There’s a habit in Python of thinking that public/private methods don’t matter, but I find they make it very obvious what needs to be tested and what shouldn’t be. Once my interface is designed, I write tests - no, they won’t always be perfect first time, but stopping to think about them does help me write better code I think. The two major keys are only test interface, not implementation and avoid mocking entirely if you can. Ian Cooper did an absolutely fantastic talk on TDD and I’ve found since watching it that my tests are far better and are far less likely to break as I change things. If you purely test the interface, then nothing should break unless you break the interface. However it can be a battle getting other devs to not test certain things that they deem important and so when working on shared code you can end up really fighting the tests every time you change something
Yea, I typically end up rewriting code right away two or three times within a few minutes. Not just the parts that would be inside the black box either, but large swaths. TDD does make the first iteration better, but that's really just polishing a turd. It also increases the cost of the subsequent iterations to the point where they often don't get done, which takes far more quality out of your code.
Did anyone else reach a point where they started writing unit tests because it was less tedious than reading console output? That was 100% what got me into testing.
@@thekwoka4707 for some reason, C/C++ are the only languages I ever debug. Not that it makes sense at all, but for some reason it just seems easier to set up tests in other languages (whereas unit testing in C/C++ is more tedious in my experience). The nice thing about tests too is that they stick around, whereas a debugger session is one time only. So its nice to be able to run my tests again if I make another change as opposed to having to step through the code again.
The real problem with TDD for me is that I don't know what I want to do before I start looking at a prototype of it, so it's more like prototype-driven development. The problem here is that he wants you to really think through the architecture of the thing before you build it, but I find that impossible with so many moving parts and alternatives in today's production environments. It makes much more sense to build small prototypes and iterate on them.
GODDD you're so right that it IS the Rust argument ; I've been making it myself for years without realizing. "it's a bit costly upfront, gonna be a bit slower in the beginning, gonna have to change the way you think about things, but it TRULY pays off in that writing correct programs means less time spent debugging/fixing them". Very astute observation!!! EDIT: I hadn't even reached the part about "less bad habits to unlearn", this is a priceless analogy for sure!
You could say this is an argument for anything difficult, but arguably worth learning. Same argument people make for vim, changing keyboard layouts, etc
The argument is sound for Rust. It can be annoying, but it helps you in the long run... by literally preventing these types of bugs from happening. TDD doesn't do that. TDD only PROMISES that when you use it, it will help you, with the caveat that if it doesn't, you are doing it wrong... I mean, you could easily argue that Rust "unit tests" your code. It tests your inputs and outputs, it tests that you use all code paths, that you don't ignore errors, and so on... but it does it for you. Automagically.
Great videos! We live in a crazy world where critical thinking like this is rare. Never trust anyone or anything that only lists the pros without the cons. I believe TDD is good BUT only in some specific cases, like when you know exactly the inputs and outputs of a rather complex function. In many cases (when you don't know exactly what you want and are exploring the possibilities, when the output is random, when building a GUI in code, or when applying boilerplate code) you shouldn't waste your time on a silly unit test, because usually there is no benefit to having such a test, and even less to writing it before the production code.
Lately there have been a lot of criticisms of Uncle Bob's teachings. TDD, Clean Code... all being rediscussed. That is interesting because in my country Clean Code, Agile, and the like are hot. Looking at what's happening here is like getting a preview of what things will be like here in two or three years. Sometimes even five.
Clean code is one of Bob's weakest contributions, though SOLID is a useful framework for thinking about design even in functional programming. Hopefully in your country Agile won't be ruined by certifications and consultants that turn it into waterfall with more ceremony.
His argument ignores the fact that sometimes the goal is well defined, but only loosely, and there are no limits on the specifics. For example, when cleaning user data: the goal is to strip out any malicious code, but there is no limit on what the input could look like. So no matter how many tests you write, the coverage still approaches 0%.
So ironic to hear Dave Farley complain about people saying “it doesn’t work.” He does the same thing; he very often says “it doesn’t work” about stuff that real world companies use ubiquitously
I was on a team that got Agile development to work, but it isn't enough for the team to go through the process; they have to believe in it. That is why I have only had one team that made it work; all the other teams didn't make the effort to dedicate time to the process. The biggest factor was the extremely well-groomed backlog and having a ratio of technical-debt stories to feature stories; without that as a base, it is impossible to succeed. On this team, everyone was required to join the refinement meeting, and we needed everyone to understand the ask, even the QA team, and give their story points with justifications for why it would take that time. So, if you have a team doing this process very well, TDD won't be as hard. My experience is that poor user stories cause more rework than not following TDD. I have to deal with missed requirements all the time, because people using the system don't know how to look for edge cases until the work has already started.
“The team has to believe in the process”: compare that with the part of the Agile Manifesto that says “Individuals and interactions over processes and tools”. There is a big misunderstanding in there somewhere.
@ What I am trying to say is, the process has to feel natural and not clinical. When done correctly, it feels like an extension to writing code and not checkboxes that must be completed to release. If you don’t feel that it’s guiding your work then you need to bring it up with the team for discussion, maybe you tweak your team’s process until it feels right. The fact remains that all teams have a process to follow, the goal is to make the least abrasive process.
I literally burst out laughing when he said "TDD is used to develop some of the best software in the world", followed by a picture of a Tesla Bugmobile!!
I love the workflow of writing a test first, then the implementation. It forces you to think about the requirements of the unit you're writing, which usually makes implementing it easier. Also, if you trust the fact that you wrote a good test, then it's fairly easy to know when you're done. What I really DON'T like about TDD when you take it really literally is the fact that you're supposed to write just enough code to make your test(s) pass. And if you KNOW that, for instance, just returning "true" is not going to cut it, then you're supposed to write another test and then make that one pass again. It's extremely tedious and way too many iterations. It's dumb. It's stupid. I just write a few tests beforehand, checking the happy flow and some boundary cases or errors, then implement it in one go until all tests pass, and then I move on. Way better IMHO.
I love that he claims this applies to game dev. "Instead of iterating on the design of your game, you should just design the entire game up front, write a test suite describing the entire game, and then the rest is just an implementation detail." TDD = waterfall, change my mind.
This is the perfect comment that sums up how I feel about Prime's audience, even though I am a member of it. So many people in the chat go off confidently about things they are extremely wrong about.

TDD is a tight loop. You write A test, you implement it, you write the next test. It is a mindset change where, before you implement the functionality, you think first about what the outcome is. So if I am writing functionality where a move_unit command is given a new position and an entity, instead of jumping in immediately and implementing it, I first think about what the desired outcome of this command is. For the sake of simplicity, let's assume this is a teleport/grid-style move instead of a smooth one with physics, but you can do this with more complex logic too. Without knowing every little detail about how this will eventually work, what's one thing I can confidently say should happen? The position of the entity should be updated to the new position. Great, that's a test. In my test suite I create a new unit test and name it whatever. I prefer verbosity so that it acts as documentation when read: MoveUnit_Should_Update_A_Entities_Position. In that test I create an entity, I perform a move_unit command, and I check and verify that the position of the entity was updated. It fails, because you haven't written that logic yet. Then you go into your game code and implement that little amount of logic. Now your test passes.

Great, now we continue and decide that entities should have a range they can move in. Let's call it 3 tiles. Let's go back to the test we wrote and update it to reflect the new development. Rename the title to MoveUnit_Should_Update_A_Entities_Position_Within_Range and make sure the new position you pass in is within 3 tiles. You haven't changed your code, so it should still pass. You are simply updating the conditions to reflect the new intent. Now let's think about what to do if the target is outside the range. For the sake of simplicity again, let's assume we do not perform the move and instead return an error, but you can do this process with any complexity of logic. So let's name our test MoveUnit_Should_Return_OutOfRange_Error_When_Given_Position_Too_Far (again, you can name things however you / your team likes; this is just how I like to write them, because when I read the titles of my tests they describe perfectly the desired functionality of the code, so they serve as documentation). In the test I create an entity, I provide a new position that is outside of the range, and I verify that it returns the error. It fails, because we haven't developed that code yet. Hop into the code, add your invariant check, and return the error if the range is too far. Now your test passes. Does the first test still pass? Great! You have confidence that your refactor did not break anything you had written before. Does the first test fail? Great! Now you know before you ship it / get further in development.

You can continue this process through the entire development of a game. It is a slow process at first, breaking your usual flow of development, but it took me 3 days of doing this in my work to get into the flow of it. As you are developing you will come up with new requirements that break assumptions you made earlier in your tests. That's fine; your tests are not concrete, they are a reflection of your intent. So go in, change the tests to reflect your new intent, then update your code and verify that everything passes.
You don't need to update your tests when you change an entity's range while tweaking the data files for your game. But when your underlying systems need to change (or be developed), you definitely can (and I believe should) do TDD.
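A compressed sketch of the two tests described in the comment above (position updated within range, OutOfRange error beyond it). The moveUnit implementation, the Manhattan-distance range check, and the Vitest API are assumptions for illustration, not the commenter's actual code:

```typescript
import { describe, it, expect } from "vitest";

type Position = { x: number; y: number };
type Entity = { position: Position };

const MOVE_RANGE = 3;

// Minimal implementation, written only after each failing test as described above.
function moveUnit(entity: Entity, target: Position): { ok: true } | { ok: false; error: "OutOfRange" } {
  const distance =
    Math.abs(target.x - entity.position.x) + Math.abs(target.y - entity.position.y);
  if (distance > MOVE_RANGE) return { ok: false, error: "OutOfRange" };
  entity.position = target;
  return { ok: true };
}

describe("moveUnit", () => {
  it("updates an entity's position when the target is within range", () => {
    const entity: Entity = { position: { x: 0, y: 0 } };
    expect(moveUnit(entity, { x: 2, y: 1 }).ok).toBe(true);
    expect(entity.position).toEqual({ x: 2, y: 1 });
  });

  it("returns an OutOfRange error when the target is too far", () => {
    const entity: Entity = { position: { x: 0, y: 0 } };
    expect(moveUnit(entity, { x: 5, y: 5 })).toEqual({ ok: false, error: "OutOfRange" });
    expect(entity.position).toEqual({ x: 0, y: 0 });
  });
});
```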
It's unit testing. You test a unit, not the whole game. Implement the behavior in an iterative way, and you will likely get an excellent design. You're right that design upfront is not what you want, and that is one of the things TDD allows you to avoid. While TDD might not naturally have a test suite for some aspects of the game that deal with hardware input/output, you can focus instead on testing the design you make in the programming language you program in. For graphics and such, if there's a specific behavior you're testing for, you have to have sympathy with the lower-level components the code works with in order to understand how to test it. This really doesn't apply to a lot of game developers, so there's likely much less coverage on how to handle tests for such scenarios. It is a great indicator of buggy code when you do not know how to create tests for it, i.e. the code is not easily testable.
You do not write all tests up front; you just write one iteration further in tests than you are in code. So you would normally just make sure your layers (for example, physics) work independently of your player logic with unit tests, so you can safely layer on top.
I have tried TDD once, and just like I read in a blog about TDD beforehand, a really big irritation for me is that when writing the test code before the main code, I get no help at all from, say, intellisense. Instead it throws a ton of errors at me, because I'm calling undefined functions and variables that I have not yet written in the main code but that the test code is trying to access. You are therefore more or less on your own writing the test part, and all the error messages make it confusing and impossible to really see whether you have even written the test code correctly when all you see is errors. For simpler tests it might work fine, but for more complicated tests it will cause issues for sure.
Part of successfully writing a failing unit test is having your code compile. Step 0 in a way. People shouldn't really write much code that doesn't compile. I think that's an IDE issue, I don't have this problem with Jetbrains IDEs. To me it sounds like you need to iterate even smaller than you thought, and do it painfully, until you kind of get a Eureka moment as far as figuring out the sweet spot of writing your next feature without doing too large of a step.
@@antdok9573 I just might need to give it some more tries maybe. No expert yet in TDD. As I said, I only tried it once, more or less as they described it in the blog. I might, however, also be a bit spoiled by intellisense in that even if I know how to write the code in my head, it is still a confirmation from the computer that my code is correct and that I am on the right track, so when it's not working as expected, like when I wrote the test back then, I get a bit put off. It might also just be an IDE / code editor thing, as you said.
@@johnpekkala6941 I rely on my PyCharm intellisense to suggest implementing missing functions/classes if they're not implemented in my unit tests. It will also just flat-out implement nulls/temporary values so that tests fail successfully. Yeah, up to you if you want to try TDD. I'm no expert, either, but I have reaped its benefits already. I don't have much experience implementing it in an existing codebase in an efficient manner quite yet. That's most certainly pro-level stuff if you want to become very quick/experienced at it.
I do TDD at my current job only because it's required. That being said, one time I do find it very useful is when I'm fixing a bug. Write a test to replicate the bug, and then fix the code to make it work properly. This ensures that the bug will not return if someone later changes something that would reintroduce it. My butt is covered: "Hey, I already fixed it, and here is the test in the git history."
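A minimal sketch of that bug-first workflow: the failing test reproduces the report, then the fix makes it pass and the test stays in the suite as a regression guard. The parseAmount function and the thousands-separator bug are hypothetical, and a Vitest-style API is assumed:

```typescript
import { it, expect } from "vitest";

// Hypothetical function with a reported bug: "1,000" used to be parsed as 1.
function parseAmount(input: string): number {
  // The fix: strip thousands separators before converting.
  return Number(input.replace(/,/g, ""));
}

// Regression test written first, reproducing the bug report; it stays in the suite
// forever, so the bug can't quietly come back when someone changes parseAmount later.
it("parses amounts with thousands separators", () => {
  expect(parseAmount("1,000")).toBe(1000);
});
```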
Just watching it: when he mentioned the Devonshire report, he was referring to the DORA DevOps report. While stats or data based on surveys are to be taken with a grain of salt, it does bring some data to back up his claims. Also, I think the blue part is not well understood, or the "you can't write bug-free code" bit; that's not really the point. But doing late testing too often ends in the situation you described during this part of the video.
All this stuff about him talking about not being able to test an interface that has multiple responsibilities is sort of a symptom of how TDD is not being followed correctly. I don't like the subjectivity of his "take" on TDD. Doing TDD with code that already exists is a greater challenge, but it works out if done iteratively in small steps. Many things in TDD can be subjective, since the cycle itself is pretty vague (how much refactoring? is my next failing test a new feature or a demonstration of an existing bug? etc). That said, there are some pretty clear rules as far as unit testing and the such that help clear up that confusion anyway.
The thing about TDD is that it is not a golden rule. It is like writing books or novels: some great authors may tell you to first make the outline and all the plot lines to make your work easier or better. But there are always different ways for a novel to be great. So TDD is a method, but it should not be considered a rule for programming.
In my experience, yeah, TDD sometimes fails and you need to throw everything out, but those times are exceptions, not the rule. I think it is not fair to take an exception that doesn't work and say "this doesn't work". I agree with "you haven't done enough". Yeah, sometimes you write code just to see what happens and that's fine, but once you figure it out, you should know what you need to do, and therefore you know what test you should create. I will say, sometimes I hate doing TDD, but saying "I hate doing this" and "this does not work" are not the same thing.
The problem is not writing tests but how you write them. I am working on a big Java project right now, and running a test takes at least 2 minutes on my computer (yup). So run a test 30 times and, boom, an hour is gone. I love writing tests, but I just wanna get the job done and I have zero patience for this BS.
Having worked on systems in "hard requirements" engineering, TDD works very well. After all, the laws of physics don't change on Tuesday. However, prototyping business systems and doing rapid delivery is antithetical to TDD approaches. I *have* had the entire business change focus on Tuesdays. Tests are great, when you build them depends on the level of chaos the requirements are in.
Martin Fowler has written about the saying, "If it hurts, do it more often" as it pertains to activities like deployments and integration. The same applies to TDD.
"How do you know when you're ready to write code?". -> Exploratory test (inline code logic in the test itself). Once green you can refactor and at that stage you started writing code (extracting into classes etc).
If it improved code quality I would be happy to do it. Everywhere I have worked where TDD is the norm has had some of the worst code quality I have ever seen. One company's was so bad I chose to quit rather than stay and have to sort it out. TDD in practice is just a heuristic or andragogical approach. It is training wheels for people who struggle to think in code, and for anyone with even a modicum of acumen the benefits are nonexistent and the detriments are ubiquitous.
I agree that tests should be as large as possible, to decouple them from the implementation details. But the flipside is that when I'm building up a complex system, I need to be able to verify that each component is working correctly. If I only have one end-to-end test and it fails, how am I supposed to know which of the 30 functions involved is causing the error? Personally I struggle to see any way to build up a complex system other than breaking it down into components and building each component one at a time, checking my work at each step. And to check my work I... test it. So honestly I find myself doing TDD either way. The only difference is whether I keep the tests around or whether I chuck them out. But a lot of the time I prefer to keep them around, perhaps refactoring them to be more abstract and generic. And yes, when I change a low-level interface, that may break a lot of tests. But then again, I need to test the new implementation anyway, to make sure it has all the behaviours I expect it to. So I may as well do so by fixing the broken tests?
I’ve realized that the arguments against TDD only exist on Twitter or in YouTube comments, not in real life. None of you actually think companies building anything world-class don't write tests. You think Stripe or even Netflix don't follow TDD? Imagine a world where what Prime says is true, where people just start coding with no pre-written test or specification diagrams to code against. That sounds like a mad world imo. No way I'm getting on a plane where the devs are like "you know, I don't really believe in tests, I just code because specifications may change anyway". Come on.
I think the only way to get TDD to work is to sandwich a vertical slice architecture between two fat layers whose interfaces never change, so that your units start and end with the top and bottom layers. This way the fat layers stay rigid and don't require changes to your tests during refactoring, while the middle layers stay flexible so you can make them as simple or complex as necessary to accomplish the goal. The problem, again, is that the fat layers need to be perfectly designed and unchanging, so if it's not a type of application you've made before, then you likely won't know what those interfaces will be, and thus the problem returns. EDIT: Yes, this is more similar to BDD or integration tests, as Prime said, and yes, the entire IO layer will need to be faked/dummied/mocked, so it'll probably be a third fat layer that is tested outside of TDD. This means that if all your program does is take some input and save it to a file, then your unit test basically tests nothing, because there's no business or domain logic in the middle. It would just be a bread sandwich, no BLT.
I think the only way to get TDD working is to think of TDD as "design by test first." Otherwise, yeah, you do end up in a vicious cycle. There's tons more to TDD than just that, but at its core, Dave wants us to think about developing behaviors (less so testing individual functions, as Primagen states) for our code based on the tests we're creating. I recall him wanting to rename it to Test Driven Design. It's ok to break out of the cycle of TDD to do exploratory code when you actually have zero clue how to write the unit test, as he mentions at 6:00.
BDD started as an idea on how to teach people to do TDD properly. Only after some time it got overrun by tools and technologies, as most good ideas in SE do.
I try to write declarative code as much as possible, and logic (aka spaghetti) as little as possible, because then compile-time checks like type systems don't let me compile wrong code. The code turns out very testable, but often the tests would just replicate the resulting declarative code 1:1 in some shape or another, which is not very useful, because at that point you can just look at the code to confirm that it is correct, without tests.

If you write declarative code, your code turns into these bricks that you can replace, reorder (in some cases), or move out into functions or their own libs (which makes it easier to open source). They are easier to read, write, and edit (I often come back to my old spaghetti logic and can barely understand it, while I often come back to my declarative code and can pretty much continue where I left off). Builder/Stream (Iterator) APIs, patterns/algebraic types, aggregation (components in many frameworks, e.g. in Leptos), and sending messages (reducers, channels, etc.) are all your friends. The fact that most loops can be replaced by streams also makes me think that they are pretty much declarative in their own right, just written slightly differently.

When you define your types, give them more meaning; make their meaning be your logic. Instead of checking for nulls, numbers in ranges, etc., make your functions accept only correct inputs in the first place. In the same fashion, the most correct return type also limits the possibility of wrong outputs (since the function logic has to match the output type), as well as making sure units glue together better. Often, logic that was like `assert check_a(value); assert check_b(value); ...do_something_to_value(value)...` would turn into `A a = a(value); B b = b(a); ...do_something_to_b(b)...`. As an example, a function that outputs Days and a function that takes in Seconds would need a conversion in between them; a function that outputs u64 and a function that takes in u64 are prone to rather silly mistakes. Bonus example: currently on the implicit-clone crate of the yew stack, I was thinking about whether we should be adding methods like `.push(item)` to our immutable arrays that would clone before adding an item. Some sort of result type that tells the user that the array has been cloned, so they should consider the ramifications (mainly performance) of that, seems reasonable. The Rust API does this with `[T].repeat(n)`, which returns `Vec` rather than another `[T]`.
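The comment above is Rust-flavored, but here is a minimal TypeScript-style sketch of the same "make functions accept only correct inputs" idea, using nominal wrapper classes. The Seconds/Days/scheduleIn names are hypothetical:

```typescript
// Wrapper types carry meaning, so passing Days where Seconds are expected fails at compile time.
// The private `brand` fields make the classes nominally typed (not interchangeable).
class Seconds {
  private readonly brand = "Seconds" as const;
  constructor(public readonly value: number) {}
}

class Days {
  private readonly brand = "Days" as const;
  constructor(public readonly value: number) {}
  toSeconds(): Seconds {
    return new Seconds(this.value * 24 * 60 * 60);
  }
}

// Accepts only Seconds; the conversion from Days must be explicit.
function scheduleIn(delay: Seconds): void {
  console.log(`scheduling in ${delay.value}s`);
}

scheduleIn(new Days(2).toSeconds()); // ok
// scheduleIn(new Days(2));          // compile error: Days is not assignable to Seconds
```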
The only time I do some kind of TDD is when a problem is very well specified in advance, something like writing/porting a data structure or hash function. Other than that I find it slows everything down, especially at the start of a project where many things are not even decided/specified yet or the customers change their mind very often.
That's not really TDD if the code already exists in some form... That's just a test harness to ensure your new code behaves like the old. Your assessment regarding new projects is 100% accurate.
I don't hate TDD. But I hate TDD when: 1. It is used as a replacement for a type system 2. It becomes a religion TDD does nothing to help with overall system design. If the design is bad, you will find TDD tests that are bad squared.
All I took away from the TDD argument is that if I don't like snowboarding, it's just because I didn't snowboard enough. So the beatings will continue until morale improves?
One problem with TDD is that during refactoring, you may want to change the interface too because, well, you came up with a better interface. But this means you have to rewrite all your tests. This is not only expensive, but it also means that you may introduce a bug into your tests, which means your tests don't provide the sort of 'safety net' during refactoring TDD promises to provide.
I mean, the moment you are refactoring, you risk introducing some bugs. And yet you want to be able to refactor, because your code is bound to be tech debt a few years from now. And the less you have to change the tests for that refactoring, the better. For a specific piece of code, the further out you are testing, the less likely you are to have to change the test while refactoring, because it won't be bound to the implementation. The only way to have change-free tests is to not write them in the first place.
31:20 The thing about a TODO test is that you don't disable it, and it fails when you run the test suite. So then if you forget to get around to fixing it, you get a reminder every time you run the tests, and anyone else who runs your tests can see that it's unfinished (and maybe they will decide to fix it). Then if the code breaks, you have a very good idea of where to start looking for bugs.
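One way to express the kind of TODO test the comment above describes, assuming a Vitest/Jest-style runner; the leap-second example is hypothetical:

```typescript
import { it, expect } from "vitest";

// A deliberately failing placeholder: it is not skipped, so every test run reminds
// whoever runs the suite that this behavior is still unfinished.
it("TODO: handles leap seconds when diffing timestamps", () => {
  expect("not implemented yet").toBe("implemented");
});

// Some runners also have a first-class marker, e.g. it.todo("..."), which reports the
// item without a fake assertion, though it typically does not fail the run.
```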
TDD works if you just test the highest level API. Often that means writing a test for a service or controller only. Somehow everyone got the idea that good TDD is about testing every function. It's the opposite. You just translate the high level use case (e.g. delete a record) into a test, and that's the only test you need to write. You don't need to write tests all the way down the stack. As a side note, IMO a unit is a unit of functionality.
It sounds like you are describing the highest-level API as in what the consumer uses, and that is not at all where TDD works best. TDD serves as documentation of the intent of your code. If your code has a class that has logic in it, it should be tested as part of TDD. You only need to test and document the public functions; how the code gets there (the implementation and private functions) is its own business. But your TDD tests should cover the public API of the class: when method A is given parameters B and C, result D should occur, or error F should be thrown. That way, when something breaks because of a change in the code, either in this function or in a function it relies on, your tests will fail exactly where the error occurs and you can fix it.
That's integration tests, a unit is a single piece of functionality. With a web app and CRUD, there's tons of layers between the request and the database getting a record entered into it or removed.
A unit of behavior. A unit test is still small generally though, so you probably don't want to go up to the API level. That's integration testing. Hardware input/output shouldn't really happen in many of these test cases.
The thing I hate about TDD is just that it is a *lot* more work. Not only do I need to code the thing, but I also need to code a thing that defines what the thing should do given a certain set of circumstances. plus, some tests are just objectively harder to code than others. Testing if a function added two numbers together is WAY easier than testing to make sure resizing the browser window moved the elements to the exact spots they are supposed to be. Sometimes you have to do a lot of extra work just to even test the damn thing did what it is supposed to, sometime it can take many times more lines of code just to check all the edge cases. Plus, you have to test all the fail states failed in the way that you want the code to fail. It is a lot if you want your tests to be any good later when you are making changes to your code and the tests tell you exactly where and what you messed up. Otherwise, what are you doing all this extra code writing for?
So you hate the long-term perspective because it's more work? Not to mention that you seem to hate not just TDD, but tests in general. And with that mindset, either you write those tests now, or someone will cry in the future. Hopefully you, because if you're the kind of person who, out of laziness, creates more workload for others in the future, then I don't like you.
@Mish844 I'm not strictly saying that tests are all bad; my comment mostly laments that driving your development with tests makes building the product significantly more laborious. Plus, since it's garbage in, garbage out, if your tests are wrong at any point, they can lead you to either build the wrong product or end up with more surface area in which to write mistakes. With TDD, the whole point is that your tests are supposed to drive the development, so it is not much of a stretch to say that the tests describe the thing you are building. But if you are adding a feature and an older test fails, without any other context you cannot know if the assumptions made when that test was written are still valid in the face of the changes you made. So if you change the test to meet your new assumptions, you are now violating the description of the product that test was supposed to represent, which theoretically undermines the point of having written the test in the first place. If you rigidly adhere to the test, you may not be able to add the feature, or you may need to integrate it in some convoluted way, which can lead to greater code complexity and/or poor performance. These are the negative side effects you get from TDD. Done well, though, TDD gives you peace of mind that you didn't catastrophically violate the expectations of some other code that is well distanced from yours. Not all projects and teams are suitable for TDD. There is a very real cost to using it and it is very easy to use it wrong. I'm not saying "don't", but it isn't a magic solution. It takes work and discipline from everyone involved.
A point of praise for TDD is making things more testable because it'll naturally make you write more functions/more classes so that you can test it. But imo you've sort of just forced yourself to break up something which was naturally 1 unit into a bunch of non-reusable puzzle pieces
Yes, very true. While poorly designed code is often hard to test, just because code is designed to be testable doesn't make it well designed. Often, designing for a test makes the code harder to understand and more complex.
I would argue that imposing the requirement that things must be very testable actually makes it more likely that the code becomes poorly designed. While a lot of good designs happen to be very testable, counting those is a bit of a moot point, because a programmer capable of finding those good designs would usually have used them regardless of that requirement, for the simple reason that they are good in the first place. So imposing a high-testability requirement does not actually lead to much more use of those good designs (aside from serving as one of the reasons why those designs are good in the first place). What we instead get are the cases of varying design quality, which now have an additional requirement imposed on how the design should look. That extra requirement, especially for the less-than-great designs, makes it harder to solve the same problem because of the added complexity of handling it, and as a rule of thumb design quality is inversely proportional to how far over their head the designer is; more complexity can certainly cause that.

It should also be noted that sometimes the most appropriate thing is not classical automated testing, but something else. This is especially important when the main source of failure is conflicts with foreign interfaces, which classical automated testing would often need to mock out. An example would be testing interaction with persistent systems that you do not want to damage; some of these can be handled with special testing environments, but those environments are often way too heavy for the kind of automated testing needed for TDD. That brings us to another point: while it is technically possible to test a lot of weird things like those mentioned in a video game, such tests often require a lot of skill and effort to set up, often far in excess of their value, and TDD is definitely not enough of a miracle to be worth doing at any cost.
With TDD, you are supposed to test at a pretty high level. If you find that TDD requires you to change how you structure your implementation, you are probably testing too low. (Yes, many tutorials get this wrong, too!)
I think Dave's definition of a unit test scales a lot more than Prime's... Tests for CRUD are just as valid as tests on functions. I think the only rule is that the tests should be small, so you would be more likely to use stubs rather than real services where services reach out elsewhere.
@@sanjayidpuganti I have a published interface spec and don't want to have to spin up an entire test environment to test my implementation. Each endpoint is small and simple to test - so why not... Integration tests would probably need artificial inputs to touch all code paths right now.
I was interviewing for a backend position once, when the lead tester for the project (30 people) asked me if I do TDD. "Sure", I said. Then I laughed, and then the tech lead laughed, and then the team lead, and then the test lead himself laughed, and then the project manager laughed, and then the HR representative laughed. I got the job.
I think you nailed it at the start, Prime. If your units are tiny, the value is negligible. TDD seems to work best when you approach it like mocked integration tests: test everything through the stack, from fake input to fake output. Then you can refactor to hell in the middle without affecting your public API. If you apply it at the level of a single function, you couple tests to too small a unit, which means refactoring screws up the tests. You actually write most of your shell scripts this way: give it input, run it, check the output. But there are also certainly places where TDD does not make sense. Lots of small abstraction packages just don't make sense to develop using TDD.
Thank you! I thought unit tests were just that they didn't cross a port. Unit tests that are too tight are bad TDD to me. It's why BDD came about. To be fair though, I am not a fan of TDD.
@@Tkdestroyer1 At the single-function level it's very difficult to differentiate between implementation and behaviour. Too tight basically means the code under test is so simple that it only has a single behaviour, and if you want to move that behaviour, you have to update the test. Think of unit testing a class, then deciding that class needs to be split into two; now all the tests and setup and teardown need to be updated. If you instead target your tests at the public API of your system, you can split and recombine the individual classes underneath as much as you want and nothing needs to change. Integration tests is a terrible name, because to some people it means spin up the DB and the Internet and test for real, and to others it means test the integration between the internal modules while still isolated from IO dependencies. That second one is the golden path for TDD.
I reason about it like this: 1) You can't write code until you have a complete, rigorous and unambiguous Detailed Design Specification (DDS). That is detail down to the "unit", define it as you will: function, module, class, down to the API, function parameters, etc. Whatever. 2) You can't write tests until you have a complete, rigorous and unambiguous DDS. As above. 3) It follows then that it does not matter whether you write the tests first or the code, because they both derive from the same document. At the end of the day they have to match up, but they do not depend on each other. 4) Tests should be written by different people than those who write the code, thus greatly reducing the chance that they bake in the same misunderstandings of the requirements/design. 5) For extra reliability, tests and code should be reviewed against the DDS by yet other people. Someone please tell me if and where the above logic is faulty. That is how things were done back in the day when I spent far too many years on test teams for Boeing 777 flight controls and such things. Now, I can well imagine that in the modern world many do not have that rigorous DDS document, what with being all "agile" instead of "waterfall". Seems to me then that the tests in TDD are being hoisted to the front of development as a replacement for that DDS document. Well, if you want to churn out buggy code that is the way to go I guess :) It also seems that a lot of development forgoes the DDS and goes straight into code. Exploration is done. You don't know what you want until you have done it. Things may change along the way: functions, parameters, APIs, protocols, whatever. It's right when it is right. Demanding tests be done upfront and then having to constantly change them to keep up with the exploration is just a daft waste of effort. Having said all that, I would have more sympathy for TDD if it ever said that the people writing the tests should be other than those writing the code, satisfying 5) above in the absence of a design spec. God I hate all these self-proclaimed software gurus who constantly advise us how to work, and then say "You were doing it wrong" when everything goes to shit.
If I ever have the privilege of working on a team, I will be taking all of this to heart. That being said, for my larger projects I do write up a pseudo-DDS which does greatly speed up my development.
@@ja31ya I wish you well on your journey. Working with teams can be a joyful and educational experience when the team comprises people of like mind who know what they are doing and have a passion for the project. It can also be miserable when nobody is enthusiastic, everyone is only in it for the money, communication is terrible, and it's not possible to trust that everyone is on the same page.
Software gurus who no longer write code telling you how to write code. True, you need a viable design before starting work. Since I work alone my design is informal, but it takes a lot of time before any meaningful work begins. It seems I am a dumb person because I spend a lot of time thinking about the design; even in the middle of a project, when I see major flaws in the design, I stop dev effort and concentrate on design again. I don't know how Agile addresses the need for a viable design and, importantly, stopping when the design is flawed, adjusting the design and throwing away some sprints in the process.
It is fun to see how many programmers have taken "unit" to mean "function" or "class" when originally it means a "functional unit of the application". In terms of viewing your application as a black box, that "functional unit" actually equates to a "functional requirement" and is therefore equivalent to an end-to-end integration test. The interface is the functional requirement derived from analysis, so the underlying implementation can change as much as necessary and the test will remain stable unless the actual functional requirement of the application changes. That change may come from later clarifying in more detail what the original requirement was supposed to be, or the requirement may be removed/replaced entirely during the product's life when it no longer serves a purpose. I urge programmers to try to use "unit" in terms of "functional requirement" when talking about TDD, as this helps understand the concept better. If you really want to drive the tests down to single functions/classes you can do that as well, but you have to realize that the "black box" you are testing the "functional requirements" for is that class/function, and when that class/function is no longer required by its caller those tests are no longer useful in the wider scope of the application, just as end-to-end tests would not be meaningful when the application is deprecated/discarded. In terms of developing, with TDD you start from the outside in. First tests for the application as a black box. Then, during refactoring of your code, you will have a better understanding of how you want to organize the internals of the application (which at that point already works, since the outermost tests are green, and refactoring is easier since you have guide rails to indicate when something broke), and thus you can start defining the "functional requirements" of some internal components, write tests for those, and refactor the code to move those implementations into those modules/components. With every level of depth you increase the chance that that implementation will become deprecated and those tests will therefore get discarded. Do with that assumption what you will.
My main problem with TDD is that it doesn't respect the cognitive process of complex problem-solving. My development flow is: I make it work, then I improve the quality. And I never got the idea that OOP and TDD can live in harmony. In the OOP world, it is ridiculous how far people have gone to make it work (take the implementation and API of Mockito as an example). So you'd better familiarize yourself with the difference between Mock, Stub, Fake, and Spy before the next interview? And if you are familiar, you spend your cognitive energy in a paradox-of-choice situation wondering what to use. In FP, you can unit test just with the core language (libs could be nicer though). I don't know how anyone justifies the high investment (time, effort) against the little benefit TDD brings. I'm not against applying TDD in the correct situation. In some situations, that's the best way forward. But the TDD camp doesn't say that. They say if you are not doing it all the time, you are a "cheater". Great job @ThePrimeTime on reacting to the video. Some of your points are gold!
Eventually, TDD has almost no overhead. This has been the case for me in my code base. Getting there took me about one to two weeks and mostly involved me exploring what the hell I was supposed to be doing in the code in the first place. Once I actually figured out what I wanted to do, the TDD cycle now takes me 15m-2hrs, depending on the feature, and it remains that way as I sparingly re-visit the code from time to time to make new changes. Most of the time I wasted in that 1-2 week period was not because I was re-writing tests, but because my understanding of how I was working with some frontend/backend technologies had changed entirely. Regarding your question about the different testing terminology: as far as I know, you should generally do something other than mock things out. That is a last resort, in my view. Above all, create a working interface. The idea is that you can run your unit tests against fake objects simulated in-memory, and that same test would work when you use the actual implementation backed by a database or the internet instead of something in-memory. At that point you're testing an interface, and that gives you a lot of test coverage and quicker dev time. That's a big reason why I enjoy Python's duck-typing and TDD.
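To illustrate that idea, here is a minimal sketch (the names InMemoryUserStore and greet_new_user are invented, not from the video): the same contract check runs against an in-memory fake in the fast suite and could run against a real store in a slower one.

```python
# Sketch: the same "contract" test runs against an in-memory fake today
# and could run against a real database-backed store in a slower suite.

class InMemoryUserStore:
    """Duck-typed fake: same methods a real DB-backed store would expose."""
    def __init__(self):
        self._users = {}

    def add(self, user_id, name):
        self._users[user_id] = name

    def get(self, user_id):
        return self._users.get(user_id)


def greet_new_user(store, user_id, name):
    """Code under test: depends only on the store's interface."""
    if store.get(user_id) is not None:
        return f"Welcome back, {name}!"
    store.add(user_id, name)
    return f"Hello, {name}!"


def check_store_contract(store):
    # The assertions only touch the public interface, so any store
    # implementation (fake or real) should satisfy them.
    assert greet_new_user(store, 1, "Ada") == "Hello, Ada!"
    assert greet_new_user(store, 1, "Ada") == "Welcome back, Ada!"


def test_greet_new_user_with_in_memory_fake():
    check_store_contract(InMemoryUserStore())

# A slower integration suite could call check_store_contract(...) with a
# hypothetical database-backed store, swapping only the implementation.
```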
The conclusion is very true IMHO, there's no point testing implementation details, you're just making your future life harder ; tests are meant to ease refactoring (or at least give you a certain level of confidence when doing it), not the other way around.
I feel like TDD might be useful for those heavily spec’d technologies. Or in a sense you can call TDD spec driven development. I think some mature blockchains such as ethereum can benefit from TDD as the specs are always heavily thought through, and won’t change easily.
Dave on his channel often uses the term "behavior driven development". I don't think most software changes its specifications easily (once the core of the system works). Yes, you add features, but the core of the system rarely changes radically from the user's viewpoint. And if you write your tests the TDD way, the tests will be much easier to redo once the specifications change, because the code was built to favor test creation in the first place.
I write my tests after, and it's not hard. TDD guys saying I need to restructure my code if I don't practice TDD are mostly wrong, or with TDD I would have had to restructure it anyway. No time saved. If you can't write testable code without TDD then that sounds like a skill issue to me.
There are definitely situations where it's hard to write tests after, but I think that largely misses the point of having to worry about that nuance. It's more of a persuasion technique to try and help convince you there's a simpler way of writing tests. The point of doing TDD is that your design is led by tests, not that it was easier to write. The simplicity of writing beforehand is just a side effect but a major reason why people continue to do TDD.
No, that's just wrong. No one claims their favorite development method achieves bug-free code. We all know that's impossible, including the people advocating for whatever development/test method. The goals are usually to reduce the number of bugs, or to make code easier to maintain in the long run & similar claims.
So... is TDD just something you do to learn to write modules, starting from the very small pieces of code, and then when you've climbed that ladder you have a better understanding of how to write code that is testable and more modular, and you throw that ladder away and instead of being dogmatic about it you apply the lessons learned on a scale where it makes sense when it makes sense? So is TDD more like a kata for those who still don't know how to write code that is modular and easier to test?
I am genuinely under the impression that the book definition of TDD doesn't actually work (or not for the reasons described) and that they are just confusing correlation and causation. TDD forces you to write tests (which is good, and I need it >_
What infuriates me is how pointless prominent name dropping is - there's plenty of successful companies that don't practice TDD, so by the same logic, should we all just copy them? Heck, Kent Beck worked for Facebook and he was in for a shock at their approach to testing (they barely practice it, they prefer testing in prod with the infra to rollback smoothly) and he came around to how there's more than one way of achieving software quality.
If you are making yourself do TDD, something is wrong. TDD is just one of the ways of writing code that you naturally do in some situations. More specifically, I write test code first when the code I want to write is clear and obvious while the test code is not. In such cases, I write the test code first to consolidate the strategy for how to test the code.
I used TDD to develop stuff that's easy to break, like writing functions to read bytes and determine a variable-length quantity for the MIDI spec. Just an easy way to verify it's working. It makes sense that NASA needs those types of tests for their calculations. TDD is a tool that can help when engineering things that are prone to breaking but likely won't need major refactoring. Like Prime said, black-box function testing makes tons of sense.
I love TDD and I'm proud. It's a fantastic method for developing software, and I think almost every rebuttal people give for why TDD is bad comes down to a misunderstanding of how to write good tests.
Prime's point about tests not catching every bug: that's fine. That's not a fault of TDD. When a bug occurs that you didn't catch at first, before you fix it, write a test that describes your ideal output given the state and input necessary to create the bug, and verify that your assumption holds, i.e. the test actually fails when the state and input are X. Then fix the code so it doesn't fail. This isn't a crime against TDD, it's how TDD handles bugs. Now you know that one bug will never come back, because if it does your tests will catch it, since they look for that exact situation. If it fails another way, repeat the process.
Where did this insane idea come from that tests are written in stone and can never be updated, deleted or changed? That is not true. TDD is about documenting the intent of your code. If you no longer need the code, or if your test is no longer correct, you can throw it away and change it. Just like we do with code. Imagine someone complaining that the requirements changed and now they need to change their code. It happens. Change the code.
@@anarchoyeasty3908 About making a new failing unit test that describes the bug: bingo. I think it also helps if you consider that you don't want to overcomplicate any single test either. Having two asserts in your test case instead of one, just to catch that bug you didn't catch earlier, isn't great unit test design.
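As a small illustration of that bug-first flow (parse_quantity and its empty-string bug are invented for the example, not taken from the video):

```python
# Sketch of writing the regression test before the fix.
# Suppose a hypothetical parse_quantity("") used to crash with ValueError.

def parse_quantity(text):
    """Parse a quantity string like '3' into an int; empty means zero."""
    text = text.strip()
    if not text:          # the fix: previously this case was missing
        return 0
    return int(text)


def test_empty_string_means_zero():
    # Written first, while the bug still exists, so it fails for the right
    # reason; after the fix it passes and guards against regression.
    assert parse_quantity("") == 0
    assert parse_quantity("  ") == 0


def test_normal_quantities_still_parse():
    assert parse_quantity("3") == 3
```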
As far as I understand, TDD is almost what you've said in the past that your approach is (writing flat, one-dimensional code at first and then refactoring it later), but with an added step at the beginning where design specifications become written into code
I think I have a similar approach. My workflow usually is to 1) start with a dirty prototype, 2) write integration tests for the whole feature/module (sometimes I start with that), 3) refactor the shit out of it, 4) if I am happy with the result, write unit tests wherever it makes sense. So I guess it's kinda TDD, but only partially and not always.
What you are doing sounds good and I feel it is better than TDD, but it does not sound like TDD, where you need to write the test before writing functional code.
trauma driven development
Blunt force trauma driven development
@@fabricatorzayac Grug's favorite way to improve his co-workers' productivity!
@@packediceisthebestminecraf9007 *grug tempted to reach for club*
Traumatized Developer Disorder
Haha on point!
I do TDD when I know exactly what I need and how it has to work. That's not always the case. It's a tool like any other. No more, no less.
The problem is that test coverage is a measurable stat, and bigger number better.
@@ShadoFXPerino It's a starting point and not an absolute. You can still do coverage afterwards and extend the tests. It's not set in stone.
TDD shouldn't focus on implementation detail. It should focus on creating a healthy design. I do agree, however, that it is a tool.
Huh. I only use TDD when I don't know what to do.
@humbei It is completely fine to start without tests, but you soon fall into the TDD cycle and refactor your code driven by tests.
Far more common is technical debt driven development
//TODO: complete this comment
@@adama7752 DO NOT TODO! At least don't commit it to your repo. TO DON'Ts as they're more commonly known :)
// TODO: 2008-04-23 Remove after other team gets their shit together
@@antdok9573 Agreed. Even throwing not-implemented exceptions is something you won’t see until runtime. A comment easily goes under the radar.
tech debt is the reason we all remain employed 😂
What's worse than TDD is an extremely opinionated colleague who judges you by your lack of interest in TDD...yet production is full of their code with no TDD
pretty sure the engineer you’re describing is Dave from Continuous Delivery
For me the "problem" with TDD is that it amplifies the advantages and disadvantages of just testing. And there are MANY bad practices in tests. The most common one I see is testing implementation rather than behavior (that alone is the source of 90% of the pain of testing and refactoring). If you have bad testing practices, TDD will make everything worse, but if you have good testing practices, TDD will make things better.
What are some good testing practices?
@@kelvinwalter8623 Not testing the implementation. Trying to keep tests declarative. Covering boundaries of the inputs (test min/max and +1 beyond). Checking that failure states are predictable. Try to write tests somehow convey intent because code often can't.
This is why the person who came up with TDD hates it being called TDD, that it should be called BDD (Behaviour Driven Development).
@@martinbecker1069 And BDD got co-opted by the cucumber/gherkin people even though it doesn't need to be married to a silly requirements specification.
@@sqeaky8190 "requirements specification"?
I'm in the TDD camp, and I rarely write unit tests. I feel like unit tests are a bit of a straw man here, they are a terrible candidate for TDD for the exact reasons you gave: when tests are small they are difficult to refactor. You are coupling tests to functions/files/classes and if you ever need to move logic out of these places then you also have to change the test file which slows you down. TDD is better when you're writing integration tests: all your test should care about is the inputs and the outputs, so you're free to design and change the code however you want during refactoring as long as it gives the correct output from the same input.
What's your stance on mocking?
@@JChen7 I like mocking, but can understand why some people dislike it. It's a tool that's easy to abuse. If I want to write tests that test my whole application, but I don't want to test external dependencies (like database connections, s3, etc) I like using mocks. What's important is to mock as little code as possible so that you are only mocking the external dependency but are still testing all your own code, and you do this by pushing dependencies to the boundaries of your application. For example, if you make a database connection, make sure only the logic necessary to do that sits in one function and only mock that function in your test. A good way to practice doing this is to force yourself not to use mocking libraries, but instead abstract your dependencies behind an interface, and then create a service that implements that interface which you use in your tests instead of the real thing. When constraining yourself this way it is difficult to abuse mocks and can teach you good habits in pushing dependencies to your application's boundary, although It's important to be wary that you are adding extra abstractions which can create a different set of problems later down the line.
The dude in the vid specifically mentions unit testing.
@@Turalcar Hence why I said he was straw-manning unit testing, did you not read my response?
@@Jak132619 Not Primegen. The guy he was watching was suggesting unit testing
He said “…if you can love a tool.” Not “…if you can love at all.”
So.... 1.25x was still too fast for him...
The chatter who put "blue = rewrite everything (in rust)" is a based, high value programmer
but rust = red
I tend to write code twice: the first time, I just play around in a scratchpad repo with spaghetti; the second time, I structure the code more sanely and add tests. The second iteration can be TDD because by then I've figured out what I want the code to do.
This is actually still TDD which is a big thing no one understands (since I don't think anyone actually reads anything about it). You are describing a "spike".
@@samjohns8381 yeah, I've never heard that term used in discussions about TDD
You just described "requirements discovery"...
@@edwardcullen1739 I described-well really OP described-figuring out how a requirement will be solved with code.
This 100%
Thinking about comment at 5:50, if refactoring your code causes your black box tests to break, are you sure you are testing a black box?
I'm pretty sure that he thinks that when you have small unit-tested units, the refactoring will almost always change the interface of multiple units, and that will break the tests for those units.
Agreed. He is neither black box testing nor using Unit tests.
One thing I really hate (at 19:00) is when people call surveying or observational studies scientific. They are inherently not. Science needs a hypothesis, and your test of the hypothesis can't have a pre-known outcome. This really grinds my gears given that test driven development needs the test written first. Now, it's good to collect data and write it down even if you don't have a hypothesis, and it can find truths. But it's not science. Science doesn't equal gathering data.
but but it fits his narrative
0-test development is the best XD....
1. I can ship the project faster..
2. I can get more money from the client for fixing more bugs....
3. But fixes create more bugs, which means more money... and of course job security...
Tests don't prevent bugs. How does writing tests for the bugs I can think of prevent the bugs I didn't know to test for?
@@NathanHedglin But testing prevents known edge cases, right?
we can avoid fixing that :)
@@NathanHedglin Not all bugs are new bugs. Some bugs are recurrent bugs that reoccur when you make changes. Some changes can affect other parts of the code
The refactor part of TDD is also a refactor of your design (not just the implementation of your function). So yes, when doing TDD you throw away tests. Tests are a way for you to learn how to design, not just how to implement your function.
This means part of the work is throwing away all your code and your tests, just like when you do it without TDD and you throw your function away because you gained knowledge.
Summary :
Refactor = refactor implementation + refactor design
Refactor X = create or update or delete
You know, to do refactoring efficiently, you want to have tests that you do not throw away, such that you are sure that things keep working and the steps you took worked out right. If you are throwing practically all those relevant tests out, then you are not so much refactoring as you are throwing out the old and starting almost from scratch again.
@@sorcdk2880 Someone removed my follow-up comment. Just apply the parallel change refactoring pattern, and once you are done, remove the old implementation and tests.
@@sorcdk2880 Well, you'd be doing that without TDD also. If you're throwing away full functionalities and moving away from the original specification, then with or without TDD, you're changing everything and essentially starting from scratch anyway.
I think the issue here is the misuse of the term refactoring, if you're changing out everything then you're restructuring, not refactoring.
No matter what you use or whatever your methodology is, you're changing out everything anyway. Throwing away useless tests at this point is irrelevant regardless of how you implemented them.
@@ryanbeatbox Not exactly. Outside of TDD, the timing of the tests and the way they are designed and used make it such that you can often get around this problem.
Oh, so now I have twice the garbage I wasted my time on to throw away. Does not sound like a good proposition...
When working on projects with other developers I'm significantly more concerned when JS/TS programmers import 600 moving target dependencies maintained by thousands of strangers than I am whether they wrote tests for their 20 line function.
Yep. It doesn't help when you import one or two things you actually need but those things import hundreds of modules.
This is not remotely limited to JS/TS.
You see it in tons of stuff.
Add one crate to your Rust project and suddenly there are 300 downloads.
I think Ian Cooper's talk about TDD addresses most of the problems with TDD better than Dave's video and most of the points that Prime raises as well ua-cam.com/video/EZ05e7EMOLM/v-deo.html&ab_channel=DevTernityConference.
I personally do not like TDD that much, but I think it is extremely useful when you want to design an API or a tool that is supposed to provide some kind of service to third parties or other in-house services. I find it easier to design the tool itself when I have to make black-box assertions about its interface the way its users would, and it is really nice to have those assertions in the form of tests.
I also dislike TDD, but this talk by Ian Cooper is the first one that explained the idea clearly and made me think about the usefulness.
The main point is not doing it at the unit test level. That is too low, wrong, and not valuable.
Prime you should see this talk!!!
@fetherfulbiped Ian Cooper's talk about TDD is one of the best ones, addressing the problem that people think you should do TDD to test-drive class methods. Farley is repeating himself in many of his videos, due to commercial interests, however I think he is an excellent talker. Proven by this video which describes TDD very well. ua-cam.com/video/ln4WnxX-wrw/v-deo.html
I think if you have clearly defined requirements of what you're trying to create, then TDD makes sense.
The problem with TDD when the above is not true is that you spend time writing tests for something that actually isn't needed.
I think TDD forces you to write code that is easily testable.
I've seen a lot of code written without testing in mind, which made it very hard to write good tests for it.
That's something new to me. No one from either side has expressed that idea before.
Definitely true.
Dependency injection and SOLID principles are enough to get testable code without writing tests first.
@@streettrialsandstuff It's not, really.
@@streettrialsandstuff that's only for being able to use test doubles. For making sure that your test is simple and relatively short, you have to write it together with the production code, otherwise you neglect your test code OR are a god.
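To make the test-double point concrete, here is a minimal sketch (ReportService, SystemClock and FixedClock are invented names): constructor injection is what lets the test swap in a hand-written double instead of reaching for a mocking library.

```python
# Sketch: inject the dependency (a clock) so a tiny hand-written double
# is enough in the test; no mocking library needed.
from datetime import datetime


class SystemClock:
    def now(self):
        return datetime.utcnow()


class ReportService:
    def __init__(self, clock):
        # The dependency is injected, so tests can control the time.
        self._clock = clock

    def header(self, title):
        return f"{title} - generated {self._clock.now():%Y-%m-%d}"


class FixedClock:
    """Test double: same interface as SystemClock, fixed output."""
    def __init__(self, fixed):
        self._fixed = fixed

    def now(self):
        return self._fixed


def test_header_includes_generation_date():
    service = ReportService(FixedClock(datetime(2024, 1, 2)))
    assert service.header("Sales") == "Sales - generated 2024-01-02"
```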
Prime... I think a unit is meant to be a public function, or similar. A private function is an implementation detail. If you think this way, then you can refactor any way you want, including breaking things up into private functions or even a class. Doing this means you won't necessarily have to specifically test the newly created class, but of course you can.
Exactly; a public function (for unit / integration tests) or a public feature (for functional tests). The tests you write should help you refactor; not slow you down.
I'm still in school and they don't really teach us about testing in class, so I don't have much knowledge or experience, but from my intuition this is what I assumed would be the case, so I've always been confused whenever Prime starts talking about testing and this sort of topic comes up.
A unit is a publicly used member of a module.
If it was a function test we would have called it a function test.
Based on your statement, I think it is fair to say the unit, aka the public function or the like, has one or more supporting private functions/methods? So, if one private function involves complex logic, how do we test the private function independently?
So, unit tests are some kind of integration test, since we test private functions through a proxy: the public function?
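One way to picture the answer, as a sketch with invented names (normalize_username and its private helper): you usually don't test the private helper independently, you exercise it through the public function that owns it.

```python
# Sketch: the private helper is covered entirely through the public
# function, so it can be renamed, split or inlined without touching tests.

def _collapse_whitespace(text):
    """Private helper: an implementation detail the tests never import."""
    return " ".join(text.split())


def normalize_username(raw):
    """Public unit under test."""
    return _collapse_whitespace(raw).lower()


def test_normalize_username_trims_and_lowercases():
    assert normalize_username("  Ada   Lovelace ") == "ada lovelace"


def test_normalize_username_keeps_single_words():
    assert normalize_username("Grace") == "grace"

# If the helper's logic gets complex enough to deserve its own tests,
# that is often a hint it wants to become a public unit of its own.
```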
I've been doing TDD for 4 years and I love it. I absolutely love it. In the process of adopting TDD into my workflow I was so frustrated; I dropped it 3 or 4 times before finally adopting it. There are a lot of gotchas, and you are better off getting a mentor to resolve the confusion instead of stumbling onto them by yourself.
You should know that TDD doesn't produce good designs, only mediocre ones. To get to good designs you should redesign after every large increase in code size. Besides, many "small" unit tests created by TDD are just useless and you should delete them.
@@tongobong1 I agree with everything except the last sentence.
@@markonovakovic3838 The most overlooked advice about how to properly do TDD is the advice on deleting useless tests, and I believe this is the main reason why TDD is just not that great in practice.
Look at what the inventor of TDD, Kent Beck, did in his famous book. He deleted the testFrancMultiplication test because he wrote larger tests that fully covered the functionality of that smaller test. There is no proper TDD without deleting useless tests.
What does it mean if all tests are green?
The only correct answer is "Nothing". It means literally nothing at all. It doesn't mean your code is good, or clean, or safe, or scalable/maintainable/whatever; none of that. It only means it satisfies the current state of the tests, which is itself subject to change.
One thing I would say for everyone who, like me, finds TDD impractical: compiler errors are tests. If you're using a compiler that flags warnings as errors, those are tests.
TDD is good, you just have to write your code first in a scratch pad so it doesn't count and then write your test and copy your code to it.
Also, RE: skill issue and/or it takes time to "click" and/or "you can't get it if someone simply forces you", when I started introducing TDD to my non-backend-lovers juniors they absolutely loved it almost instantly (beyond admittedly a bit of initial "seems tedious") ; the boost in confidence they get from not only testing their code but being relatively certain (since the test failed initially) that their test actually exercises their implementation (... and that they didn't break anything else) is massive!
Honestly I find it hard to even get people to write tests at all, if they have never written them before. There always seems to be a period of "why am I writing the code twice" or "this is just for a code coverage stat" before it really clicks that they should be testing for the behaviours that they want, rather than testing the code does what the code does. Once that clicks, writing tests is great, but I think it does take a bit of time before it clicks, and then TDD is even more difficult because you need to completely change how you write code, and if you've been writing code for a long time, that's hard to do.
Front end TDD is the biggest ball ache though. We’re developing an app and are forced to do TDD. I’ve been doing it for a year now and it’s just a massive chore having to write the tests up front. Frequently we get feedback from marketing and users that they don’t like something when we initially send it through, and we have to completely change it.
Also, UI changes before feature deliveries are fairly common, so all that initial work writing those tests is a massive waste of time. I literally wanna tear my bloody hair out doing TDD for UI that can change frequently. I love testing but seriously hate TDD. I think it’s a waste of time to write your tests up front.
@@tanotive6182 Yeah, generally speaking I don't think I'll ever try to test frontend code ; that's a job for the QA team, aside from some very occasional (extremely rare) complex logic I might wanna unit test to get right.
EDIT: OTOH I completely disagree with you about blaming this on TDD ; the blame falls squarely on TDD misuse - I don't really believe in testing the frontend automatically, as I said. (EDIT2: An extremely basic reason for this is that most UI testing tools will happily click a 0.25x0.25 pixel button that would be entirely impossible for a human to click... I don't see the point)
@@tanotive6182 It sounds to me like you're testing the UI at too low a level, likely testing implementation rather than behavior (which is a common problem).
For one, it sounds like your team has a problem with knowing what to actually build. The UI should usually be the first thing understood since it's where the testing should start whether it's TDD or not.
Second, I'm struggling to see how the change is the core problem even if you're testing UI implementation. If you change from radio buttons to some kind of fancy collapsible selection, the level of abstraction should be the same so in the test you change "selectRadio(element)" to "selectFancy(element)".
Unless you're talking about the difficulty of implementing "selectFancy()" compared to the radio version, I don't see how this is a testing problem, much less TDD. It sounds like a misuse problem, like others mentioned.
@@georgehelyar I can't find the article, but I remember hearing this referred to as "flipping the bit" in your brain, regarding testing.
This is exactly why I always have 2 kinds of branches. 1 is for exploratory, experimental or prototypical stage (exp). Another for production code. These 2 are separate and have different goals but ultimately the end game is to deliver correct code.
The exp branch is built in a fail-fast manner to explore the problem space. It also includes exploration of how to implement potential solutions and how to test them. Sometimes we don't even know how to build the test harness, so this is the time to explore how one would go about it. The exp branch doesn't have to solve the problem to the end; it just has to establish the framework for solving the problem and a proof of concept for how tests might be written. Once a certain level of confidence has been established and presented to the group, we proceed to the next phase. The output of exp is then rewritten test-first, with TDD in mind, on the production branches. The tests can be written in bulk at this stage, and one after another the implementation follows. What's even funny is that sometimes the QA team working on test suites has already designed so many tests that a separate team turns those tests into code just to keep up while implementation is ongoing. They are peer reviewed according to production quality standards.
Sometimes the problem space is well known enough, and the means to create a test harness for it well established, that we don't need exp branches and can work directly in TDD.
So I would say we do an xtdd approach: explore, experiment or prototype when needed, then do TDD for production code. One can then iterate features on production-quality branches, given that the dev understands the problem space and the means to test well enough; otherwise they need to explore first.
For me it's about being practical and actually giving people the chance to understand before requiring rigid standards.
I don’t think I’ve ever had even 1/4 of the time for a feature that you seem to have to be able to do that.
@@suede__ I totally agree with you. Sometimes projects do not have enough budget, time or human resources, and in those cases people left, right and center will try to cut corners where they can to deliver something. How the quality turns out for that something with corners cut all over the place is another story. Sometimes you get lucky and nothing bad happens, but sometimes you get unlucky and a wrong-dosage medicine gets mixed, killing a patient, financial transactions are mishandled, planes crash, cars accelerate randomly, gas pipeline pressure valves close at random, or electrical grids fail. 😅 If you're working on another todo list then the consequences may not be as dire.
Now get ready for Dave to get medieval on your arse about how this goes against another one of his religions - trunk-based development 😂
Tests must be about the interface, how you use the thing, not the inner workings, how the thing works.
If you change the interface, you change the tests, if the interface does not change, the tests must not need change.
If they need to change, it means the tests were bad in the first place.
That's right. Although tests can also be added to when fixing a bug; that doesn't make them bad tests.
@@anarchoyeasty3908 There is a case where a bug could mean a missing test, but then you keep the test. But adding a test is not the same as changing a test.
I like to think of tests and requirements as two sides of a coin. Each requirement _has_ to have an associated test (even if that's just an ad hoc demonstration of functionality). For the most important requirements, you create regression tests that can be run before each release. If the requirement changes, you need to change the test. You try your hardest using other techniques to make the tests so they don't care about the implementation -- only the requirement. In that sense I like the idea of TDD because it acknowledges that all you need to write tests is the requirements, and if you write the implementation first your tests are more likely to be contrived nightmares that test things that don't matter. That being said, it is often impractical to fully write tests before you start the implementation, but I think it's always good to keep in mind that they need to tie back to the requirements.
Your implementation is a requirement, isn't it? Otherwise you wouldn't have any implementation 🤔
It would be really cool to see you and Dave debate this on either of your channels.
I would love to see ThePrime debate Dave on mockist style unit tests. Mockist style unit tests are the biggest nonsense ever invented and quite often used on many projects.
I think the strongest argument for TDD is that it aligns with how other engineering disciplines will create simulations and tests that ensure correct design before they begin construction. Of course software has more flexibility post-construction than other fields, but it still seems to point to its usefulness in principle
Except it doesn't; you simulate after the initial design and iterate on it. TDD expects you to already know the tests before you code; anyone who says otherwise probably never read Kent Beck's works (the daddy of TDD).
@@marcs9451 I don't believe TDD expects you to know all the tests before you code. From what I understand, you start with one failing test, get it to pass, then refactor. Then add another test, and so on.
other engineering disciplines have one big advantage over software development though (normally at least): they know exactly what they need to design and their requirements don't change stupidly often
@@kuhluhOG That's a good point. And our software often integrates with other software that also is changing a lot, so the interfaces are constantly changing.
Arguments via analogy sometimes make intuitive sense, but often fall apart in areas where the analogy doesn't make sense.
"I'm doing it right and every one else is stupid."
I find you necessarily need to write some code up front when there are areas you don't quite understand. I call it discovery. Then I move into more formal design. There's a habit in Python of thinking that public/private methods don't matter, but I find they make it very obvious what needs to be tested and what shouldn't be.
Once my interface is designed, I write tests. No, they won't always be perfect first time, but stopping to think about them does help me write better code, I think. The two major keys are: only test the interface, not the implementation, and avoid mocking entirely if you can.
Ian Cooper did an absolutely fantastic talk on TDD and I’ve found since watching it that my tests are far better and are far less likely to break as I change things. If you purely test the interface, then nothing should break unless you break the interface.
However it can be a battle getting other devs to not test certain things that they deem important and so when working on shared code you can end up really fighting the tests every time you change something
Yea, I typically end up rewriting code right away two or three times within a few minutes. Not just the parts that would be inside the black box either, but large swaths. TDD does make the first iteration better, but that's really just polishing a turd. It also increases the cost of the subsequent iterations to the point where they often don't get done, which takes far more quality out of your code.
Did anyone else reach a point where they started writing unit tests because it was less tedious than reading console output? That was 100% what got me into testing.
well, you could use the debugger...
@@thekwoka4707 for some reason, C/C++ are the only languages I ever debug. Not that it makes sense at all, but for some reason it just seems easier to set up tests in other languages (whereas unit testing in C/C++ is more tedious in my experience).
The nice thing about tests too is that they stick around, whereas a debugger session is one time only. So its nice to be able to run my tests again if I make another change as opposed to having to step through the code again.
The real problem with TDD for me is that I don't know what I want to do before I start looking at a prototype of it, so it's more like prototype-driven development. The problem is that he wants you to really think through the architecture of the thing before you build it, but I find that impossible with so many moving parts and alternatives in today's production environments. It makes much more sense to build small prototypes and iterate over them.
GODDD you're so right that it IS the Rust argument ; I've been making it myself for years without realizing. "it's a bit costly upfront, gonna be a bit slower in the beginning, gonna have to change the way you think about things, but it TRULY pays off in that writing correct programs means less time spent debugging/fixing them". Very astute observation!!!
EDIT: I hadn't even reached the part about "less bad habits to unlearn", this is a priceless analogy for sure!
You could say this is an argument for anything difficult, but arguably worth learning. Same argument people make for vim, changing keyboard layouts, etc
The argument is sound for Rust. It can be annoying but it helps you in the long run... by literally preventing these types of bugs to happen. TDD doesn't do that. TDD only PROMISES that when you use it, it will help you, with the caveat that if it won't, you are doing it wrong...
I mean, you could easily argue that Rust "unit tests" your code. It tests your inputs and outputs, it tests that you use all code paths, that you don't ignore errors and so on... but it does it for you. Automagically.
Great videos! We live in a crazy world where critical thinking like this is rare. Never trust anyone or anything that only lists the pros without the cons.
I believe TDD is good BUT only in some specific cases like when you know exactly the inputs and outputs to a rather complex function. In many cases where you don't know what exactly you want and so you are exploring the possibilities or when output is random or when building GUI in code or when applying a boilerplate code you shouldn't waste your time on a silly unit test because usually there is no benefit of having such a test and even less of writing it before the production code.
Lately there has been a lot of criticism of Uncle Bob's pronouncements. TDD, Clean Code... all being rediscussed. That is interesting because in my country Clean Code, Agile and the like are hot. Watching what's happening here is like getting a preview of what things will be like back home in two or three years. Sometimes even five.
Clean Code is one of Bob's weakest contributions, though SOLID is a useful framework for thinking about design even in functional programming. Hopefully in your country Agile won't be ruined by certifications and consultants that turn it into waterfall with more ceremony.
His argument ignores the fact that sometimes the goal is well defined only loosely, with no limits on the specifics. For example, when cleaning user data: the goal is to strip out any malicious code, but there is no limit on what that input could look like. So no matter how many tests you write, the coverage still approaches 0%.
Totally would have missed the "if you can love at all" and now I can't stop laughing LOL
So ironic to hear Dave Farley complain about people saying “it doesn’t work.” He does the same thing; he very often says “it doesn’t work” about stuff that real world companies use ubiquitously
"No matter what, you aren't good enough" future AI
I was on a team that had gotten Agile development to work, but it isn't enough for the team to go through the process; they have to believe in it. That is why I have only had one team that made it work; all the other teams didn't make the effort to dedicate time to the process. The biggest key has been an extremely well-groomed backlog and keeping a ratio of technical-debt stories to feature stories; without that as a base, it is impossible to succeed. On this team everyone was required to join the refinement meeting, and we needed everyone to understand the ask, even the QA team, and to give their story points with justifications for why the work would take that time.
So, if you have a team doing this process very well, TDD won't be as hard. My experience is that poor user stories cause more rework than not following TDD. I have to deal with missed requirements all the time, because the people using the system don't know how to look for edge cases until the work has already started.
“The team has to believe in the process”: compare that with the part of the Agile Manifesto that says “Individuals and interactions over processes and tools”.
There is a big misunderstanding in there somewhere.
@ What I am trying to say is, the process has to feel natural and not clinical. When done correctly, it feels like an extension to writing code and not checkboxes that must be completed to release. If you don’t feel that it’s guiding your work then you need to bring it up with the team for discussion, maybe you tweak your team’s process until it feels right. The fact remains that all teams have a process to follow, the goal is to make the least abrasive process.
I literally burst out laughing when said "TDD is used to develop some of the best software in the world" followed by showing a picture of a Tesla Bugmobile!!
I love the workflow of writing a test first, then the implementation. It forces you to think about the requirements of the unit you're writing, making implementing it easier, usually. Also, if you trust the fact that you wrote a good test, then it's also fairly easy to know when you're done.
What I really DON'T like about TDD, when you take it really literally, is that you're supposed to write just enough code to make your test(s) pass. And if you KNOW that, for instance, just returning "true" is not going to cut it, then you're supposed to write another test and then make that one pass again. It's extremely tedious and way too many iterations. It's dumb. It's stupid. I just write a few tests beforehand, checking the happy flow and some boundary cases or errors, and then implement it in one go until all tests pass, and then I move on. Way better IMHO.
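Roughly that workflow in miniature (clamp is an invented example: a happy path, the boundaries, and an error case written before one implementation pass):

```python
# Sketch: a handful of tests written up front, then one implementation pass.

def clamp(value, low, high):
    """Clamp value into [low, high]; reject an inverted range."""
    if low > high:
        raise ValueError("low must not exceed high")
    return max(low, min(value, high))


def test_happy_path():
    assert clamp(5, 0, 10) == 5


def test_boundaries_and_one_beyond():
    assert clamp(0, 0, 10) == 0      # min
    assert clamp(10, 0, 10) == 10    # max
    assert clamp(-1, 0, 10) == 0     # one below
    assert clamp(11, 0, 10) == 10    # one above


def test_inverted_range_is_an_error():
    try:
        clamp(5, 10, 0)
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError for inverted range")
```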
I think one of the issues with the "write test first" thing is that you will learn a lot about what it needs to do, when you try to do it...
I've yet to see anybody explaining how TDD is supposed to work for UI development. Am I supposed to compare the image output to Figma screenshots?
I love that he claims this applies to game dev. “Instead of iterating on the design of your game, you should just design the entire game up front, write a test suite describing the entire game, and then the rest is just an implementation detail.”
TDD = waterfall, change my mind
In TDD you don’t write ALL your tests up front in a test suite. You only write one.
This is the perfect comment that sums up how I feel about primes audience, even though I am a member of it. So many people on the chat go off confidently about things they are extremely wrong about.
TDD is a tight loop. You write A test, you implement it, you write the next test. It is a mindset change where, before you implement the functionality, you first think about what the outcome should be. So if I am writing functionality where a move_unit command is given a new position and an entity, instead of jumping in immediately and implementing it, I first think about what the desired outcome of this command is. For the sake of simplicity let's assume this is a teleport/grid-style move instead of a smooth one with physics, but you can do this with more complex logic too. Without knowing every little detail about how this will eventually work, what's one thing I can confidently say should happen? The position of the entity should be updated to the new position. Great, that's a test. In my test suite I create a new unit test and name it whatever. I prefer verbosity so that it acts as documentation when read: MoveUnit_Should_Update_A_Entities_Position. In that test I create an entity, I perform a move_unit command, and I check and verify that the position of the entity was updated. It fails, because you haven't written that logic yet. Then you go into your game code and implement that little amount of logic. Now your test passes.
Great, now we continue and decide that entities should have a range they can move in. Let's call it 3 tiles. Let's go back to the test we wrote and update it to reflect the new development. Rename it to MoveUnit_Should_Update_A_Entities_Position_Within_Range and make sure the new position you pass in is within 3 tiles. You haven't changed your code, so it should still pass; you are simply updating the conditions to reflect the new intent. Now let's think about what to do if the position is outside the range. For the sake of simplicity again, let's assume we do not perform the move and instead return an error (but you can do this process with any complexity of logic). So let's name our test MoveUnit_Should_Return_OutOfRange_Error_When_Given_Position_Too_Far (again, you can name things however you / your team like; this is just how I like to write them, because when I read the titles of my tests they describe perfectly the desired functionality of the code; they serve as documentation). In the test I create an entity, I provide a new position that is outside the range, and I verify that it returns the error. It fails, because we haven't developed that code yet. Hop into the code, add your invariant check so that if the range is too far, it returns the error. Now your test passes. Does the first test still pass? Great! You have confidence that your change did not break anything you had written before. Does the first test fail? Great! Now you know before you ship it / get further in development.
You can continue this process for the entire time you are programming a game. It is slow at first, breaking your usual flow of development, but it took me about 3 days of doing this at work to get into the flow of it. As you are developing you will come up with new requirements that break assumptions you made earlier in your tests. That's fine; your tests are not concrete, they are a reflection of your intent. So go in, change the tests to reflect your new intent, then update your code and verify that everything passes. You don't need to update your tests when you change an entity's range while tweaking the data files for your game. But when your underlying systems need to change (or be developed), you definitely can (and I believe should) do TDD.
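For what it's worth, here is a rough sketch of the two tests described above (Entity, move_unit and OutOfRangeError are invented names, the range check uses Chebyshev distance as one possible reading of "within 3 tiles", and the error is modeled as an exception rather than a returned value):

```python
# Sketch of the grid-move example: update the position when within range,
# otherwise signal an out-of-range error and leave the entity untouched.
from dataclasses import dataclass

MOVE_RANGE = 3


class OutOfRangeError(Exception):
    pass


@dataclass
class Entity:
    x: int
    y: int


def move_unit(entity, new_x, new_y):
    # Treat "within 3 tiles" as Chebyshev distance on the grid.
    if max(abs(new_x - entity.x), abs(new_y - entity.y)) > MOVE_RANGE:
        raise OutOfRangeError(f"({new_x}, {new_y}) is more than {MOVE_RANGE} tiles away")
    entity.x, entity.y = new_x, new_y


def test_move_unit_should_update_an_entities_position_within_range():
    unit = Entity(x=0, y=0)
    move_unit(unit, 2, 1)
    assert (unit.x, unit.y) == (2, 1)


def test_move_unit_should_raise_out_of_range_when_given_position_too_far():
    unit = Entity(x=0, y=0)
    try:
        move_unit(unit, 5, 0)
    except OutOfRangeError:
        assert (unit.x, unit.y) == (0, 0)  # position unchanged on failure
    else:
        raise AssertionError("expected OutOfRangeError")
```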
It's unit testing. You test a unit, not the whole game.
Implement the behavior in an iterative way, and you likely will get an excellent design. You're right in that design upfront is not what you want and is one of the things TDD allows you to avoid. While TDD might not naturally have a test suite for some aspects of the game that deal with hardware input/output, you can focus instead on testing the design you make in the programming language you program in.
For graphics and such, if there's a specific behavior you're testing for, you'd have to have sympathy with the lower-level components the code works with in order to understand how to test it. This really doesn't apply to a lot of game developers, so there's likely much less coverage of how to handle tests for such scenarios. It is a great indicator of buggy code when you do not know how to create tests for it, i.e. the code is not easily testable.
You do not write all tests up front; you just write tests one iteration further than you are in code. So you would normally just make sure your layers (for example, physics) work independently of your player logic with unit tests, so you can safely layer on top.
@@simitron1 That's a great example of TDD. That kind of abstraction can come naturally in TDD by separating your components like that nicely :)
i feel like a lot of his arguments shame people by basically saying "oh you don't like TDD? I guess you suck at your job then"
Yeah, just your average toxic gaslighting tech evangelist
I have tried TDD once, and just like I read in a blog about TDD beforehand, a really big irritation for me is that when writing the test code first, before the main code, I get no help at all from, say, IntelliSense; instead it throws a ton of errors at me because I try to call undefined functions and variables that I have not yet written in the main code and that the test code is trying to access. You are therefore more or less on your own writing the test part, and all the error messages make it confusing and impossible to really see whether you have even written the test code correctly when all you see is errors. For simpler tests it might work fine, but for more complicated tests it will cause issues for sure.
Part of successfully writing a failing unit test is having your code compile. Step 0 in a way. People shouldn't really write much code that doesn't compile. I think that's an IDE issue, I don't have this problem with Jetbrains IDEs.
To me it sounds like you need to iterate even smaller than you thought, and do it painfully, until you kind of get a Eureka moment as far as figuring out the sweet spot of writing your next feature without doing too large of a step.
@@antdok9573 I just might need to give it some more tries, maybe. I'm no expert in TDD yet. As I said, I only tried it once, more or less as they described it in the blog. I might, however, also be a bit spoiled by IntelliSense, in that even if I know how to write the code in my head, it is still confirmation from the computer that my code is correct and that I am on the right track, so when it's not working as expected, like when I wrote that test back then, I get a bit put off. It might also just be an IDE / code editor thing, as you said.
@@johnpekkala6941 I rely on my PyCharm intellisense to suggest implementing missing functions/classes if they're not implemented in my unit tests. It will also just flat-out implement nulls/temporary values so that tests fail successfully.
Yeah, up to you if you want to try TDD. I'm no expert, either, but I have reaped its benefits already. I don't have much experience implementing it in an existing codebase in an efficient manner quite yet. That's most certainly pro-level stuff if you want to become very quick/experienced at it.
SNL: What is Love. Don't hurt me, no more.
Haven't watched yet; answer is clearly "yes"
I do TDD at my current job only because it's required. That being said, one time I do find it very useful is when I'm fixing a bug. Write a test to replicate the bug and then fix the code to make it work properly. This ensures that the bug will not return if someone later changes something that would reintroduce it. My butt is covered: "Hey, I already fixed it and here is the test in the git history."
Just watching it: when he mentioned the Devonshire report, he was referring to the DORA DevOps report. While stats or data based on surveys are to be taken with a grain of salt, it does bring some data to back up his claims.
Also, I think it is not understood (the blue part). Or the "you can't write bug free code", that's not really the point. But doing late testing ends too often in the situation you described during this part of the video.
All this stuff about him talking about not being able to test an interface that has multiple responsibilities is sort of a symptom of how TDD is not being followed correctly.
I don't like the subjectivity of his "take" on TDD. Doing TDD with code that already exists is a greater challenge, but it works out if done iteratively in small steps. Many things in TDD can be subjective, since the cycle itself is pretty vague (how much refactoring? is my next failing test a new feature or a demonstration of an existing bug? etc). That said, there are some pretty clear rules as far as unit testing and the such that help clear up that confusion anyway.
The same people that sold us tdd at least in a unit test level are those who sold us electric cars in a coal state
The thing about TDD is that it is not a golden rule. It is like writing books or novels: some great authors may tell you to first make the outline and plot out the whole story to make your work easier or better, but there are always different ways for a novel to be great. So TDD is a method, but it should not be considered a rule for programming.
Pretty much anything you make into an unbreakable golden rule will become a pain in the ass.
thunderdome driven development == two arguments go in, one result comes out. make the result optional if both arguments perform a doublekill
In my experience, yeah, TDD sometimes fails and you need to throw everything out, but those times are exceptions, not the rule. I think it is not fair to take an exception that didn't work and say "this doesn't work". I agree with "you haven't done enough". Yeah, sometimes you write code just to see what happens, and that's fine, but once you figure it out, you should know what you need to do, and therefore you know what test you should create. I will say, sometimes I hate doing TDD, but saying "I hate doing this" and "this does not work" are not the same thing.
“It feels slower, but it isn’t.” Writing tests + writing code > writing code. TDD is objectively slower.
The problem is not writing tests but how you write them. I am working on a big Java project right now and running a test takes at least 2 min on my computer (yup). So running a test 30 times, boom, 1 hour gone. I love writing tests, but I just wanna get the job done and I have zero patience for this BS.
Absolute bs
@@Notoriousjunior374 ?
@@oumardicko5593 tdd is absolute bs
Having worked on systems in "hard requirements" engineering, TDD works very well. After all, the laws of physics don't change on Tuesday. However, prototyping business systems and doing rapid delivery is antithetical to TDD approaches. I *have* had the entire business change focus on Tuesdays. Tests are great, when you build them depends on the level of chaos the requirements are in.
This video is cut; I guess Prime is still ranting to this day.
legend goes he is still ranting as we speak
"A wicked Problem" - is a problem you have to solve, before you can solve it. Software in a nutshell.
Martin Fowler has written about the saying, "If it hurts, do it more often" as it pertains to activities like deployments and integration. The same applies to TDD.
TDD doesn't fix many of the issues that people say it does though. Code coverage is a worthless metric.
@@NathanHedglin Code coverage is not TDD.
"How do you know when you're ready to write code?". -> Exploratory test (inline code logic in the test itself). Once green you can refactor and at that stage you started writing code (extracting into classes etc).
If it improved code quality I would be happy to do it. Everywhere I have worked where TDD is the norm has had some of the worst code quality I have ever seen.
One company was so bad I chose to quit rather than stay and have to sort it out.
TDD is in practice just a heuristic or andragogical approach. It is training wheels for people who struggle to think in code, and for anyone with even a modicum of acumen the benefits are nonexistent and the detriments are ubiquitous.
I agree that tests should be as large as possible, to decouple them from the implementation details.
But the flipside is that when I'm building up a complex system, I need to be able to verify that each component is working correctly. If I only have one end-to-end test and it fails, how am I supposed to know which of the 30 functions involved is causing the error?
Personally I struggle to see any way to build up a complex system other than breaking it down into components and building each component one at a time, checking my work at each step. And to check my work I... test it.
So honestly I find myself doing TDD either way. The only difference is whether I keep the tests around or whether I chuck them out. But a lot of the time I prefer to keep them around. Perhaps refactor them to be more abstract and generic.
And yes, when I change a low-level interface, that may break a lot of tests. But then again I need to test the new implementation anyway, so as to make sure it has all the behaviours I expect it to. So I may as well do so by fixing the broken tests?
tomato driven development
🍅
Mr. Pomodoro would like to have a word with you.
I've realized that the arguments against TDD only exist in Twitter or YouTube comments, not in real life.
None of you actually think companies building anything world-class don't write tests. You think Stripe or even Netflix don't follow TDD? Imagine a world where what Prime says is true, where people just start coding with no pre-written test or specification diagrams to code against. That sounds like a mad world imo. No way I'm getting into a plane whose devs are like "you know, I don't really believe in tests, I just code because specifications may change anyway". Come on.
I think the only way to get TDD to work is to sandwich a vertical slice architecture between two fat layers whose interfaces never change, so that your units start and end with the top and bottom layers. This way the fat layers stay rigid and don't require changes to our tests during refactoring, while the middle layers stay flexible so we can make them as simple or complex as necessary to accomplish the goal. The problem, again, is that the fat layers need to be perfectly designed and unchanging, so if it's not a type of application you've made before, you likely won't know what those interfaces will be, and the problem returns.
EDIT: Yes, this is more similar to BDD or integration tests, as Prime said, and yes, the entire IO layer will need to be faked/dummied/mocked, so it'll probably be a third fat layer that is tested outside of TDD. This means that if all your program does is take some input and save it to a file, then your unit test basically tests nothing, because there's no business or domain logic in the middle. It would just be a bread sandwich, no BLT.
I think the only way to get TDD working is to think of TDD as "design by test first." Otherwise, yeah, you do end up in a vicious cycle. There's tons more to TDD than just that, but at its core, Dave wants us to think about developing behaviors for our code (less so testing individual functions, as Primeagen states) based on the tests we're creating. I recall him wanting to rename it to Test Driven Design.
It's ok to break out of the cycle of TDD to do exploratory code when you actually have zero clue of how to write the unit test as he mentions at 6:00.
BDD started as an idea on how to teach people to do TDD properly. Only after some time it got overrun by tools and technologies, as most good ideas in SE do.
I try to write declarative code as much as possible, and logic (aka spaghetti) as little as possible, because then compile time checks like type systems and such don't let me compile wrong code. The code turns out very testable, but often the tests would just replicate 1:1 the resulting declarative code in some shape or another, which is not very useful, because at that point you can just look at the code to confirm that it is correct, without tests.
If you write declarative code, your code turns into these bricks that you can replace, reorder (in some cases), or move out into functions or their own libs (which makes it easier to open source). They are easier to read, to write, and to edit (I often come back to my old spaghetti logic and can barely understand it; I often come back to my declarative code and can pretty much continue where I left off).
Builder/Stream (Iterator) APIs, patterns/algebraic types, aggregation (components in many frameworks, e.g. Leptos), and sending messages (reducers, channels, etc.) are all your friends.
The fact that most loops can be replaced by streams also makes me think that they are pretty much declarative in their own right, just written slightly differently.
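For what it's worth, here is a tiny Rust sketch of that loop-vs-stream point (the payment-amounts example is made up): the same aggregation written imperatively and as an iterator chain.

```rust
// "Most loops can be replaced by streams": the same aggregation, twice.

fn total_paid_imperative(amounts: &[i64]) -> i64 {
    let mut total = 0;
    for &a in amounts {
        if a > 0 {
            total += a;
        }
    }
    total
}

fn total_paid_declarative(amounts: &[i64]) -> i64 {
    // Declarative brick: filter the positive amounts and sum them.
    amounts.iter().copied().filter(|&a| a > 0).sum()
}

fn main() {
    let amounts = [100, -20, 45];
    assert_eq!(total_paid_imperative(&amounts), total_paid_declarative(&amounts));
    println!("total: {}", total_paid_declarative(&amounts));
}
```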
When you do your types, give them more meaning; make their meaning be your logic. Instead of checking for nulls, numbers in ranges, etc., make your functions accept only correct inputs in the first place. In the same fashion, the most correct return type also limits the possibility of wrong outputs (since the function logic has to match the output type), as well as making sure units glue together better. Often logic like `assert check_a(value); assert check_b(value); ...do_something_to_value(value)...` turns into `A a = a(value); B b = b(a); ...do_something_to_b(b)...`.
As an example, a function that outputs Days and a function that takes in Seconds would need a conversion between them; a function that outputs u64 and a function that takes in u64 are prone to rather silly mistakes.
Bonus example: currently on the implicit-clone crate of the yew stack, I was thinking about whether we should add methods like `.push(item)` to our immutable arrays that would clone before adding an item. Some sort of result type that tells the user the array has been cloned, and that they should consider the ramifications (mainly performance), seems reasonable. The Rust API does this with `[T].repeat(n)`, which returns a `Vec<T>` rather than another `[T]`.
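A minimal Rust sketch of that idea, using hypothetical Days/Seconds newtypes (std's Duration is the real-world version of the same thing): the conversion has to be explicit, so the unit mix-up becomes a compile error instead of a silent bug.

```rust
// "Make your types carry the meaning": unit mix-ups fail to compile.

#[derive(Debug, Clone, Copy, PartialEq)]
struct Days(u64);

#[derive(Debug, Clone, Copy, PartialEq)]
struct Seconds(u64);

impl From<Days> for Seconds {
    fn from(d: Days) -> Self {
        Seconds(d.0 * 24 * 60 * 60)
    }
}

// This function can only ever receive seconds; passing Days without the
// explicit conversion does not compile.
fn schedule_expiry(ttl: Seconds) -> Seconds {
    ttl
}

fn main() {
    let retention = Days(30);
    let expiry = schedule_expiry(Seconds::from(retention));
    assert_eq!(expiry, Seconds(30 * 24 * 60 * 60));
}
```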
The only time I do some kind of TDD is when a problem is very well specified in advance, something like writing/porting a data structure or hash function. Other than that I find it slows everything down, especially at the start of a project where many things are not even decided/specified yet or the customers change their mind very often.
That's not really TDD if the code already exists in some form... That's just a test harness to ensure your new code behaves like the old. Your assessment regarding new projects is 100% accurate.
I don't hate TDD. But I hate TDD when:
1. It is used as a replacement for a type system
2. It becomes a religion
TDD does nothing to help with overall system design. If the design is bad, you will find TDD tests that are bad squared.
I need to write a TDD framework for my TDD framework
All I took away from the TDD argument is that if I don't like snowboarding, it's just because I didn't snowboard enough. So the beatings will continue until morale improves?
One problem with TDD is that during refactoring, you may want to change the interface too because, well, you came up with a better interface. But this means you have to rewrite all your tests. This is not only expensive, but it also means that you may introduce a bug into your tests, which means your tests don't provide the sort of 'safety net' during refactoring TDD promises to provide.
I mean, at the moment you are refactoring, you are risking introducing some bugs. And yet you want to be able to refactor, because your code is bound to be tech debt a few years from now.
And the less you have to change the tests for that refactoring, the better. For a specific piece of code, the further out you test, the less likely you are to have to change the test while refactoring, because it won't be bound to the implementation. The only way to have change-free tests is to not write them in the first place.
31:20 The thing about a TODO test is that you don't disable it, and it fails when you run the test suite. So then if you forget to get around to fixing it, you get a reminder every time you run the tests, and anyone else who runs your tests can see that it's unfinished (and maybe they will decide to fix it). Then if the code breaks, you have a very good idea of where to start looking for bugs.
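One way to get that behaviour in Rust is a deliberately failing placeholder test; the leap-second feature named here is a made-up example.

```rust
// A "TODO test": not disabled, so it fails on every run of the suite
// until the feature is actually implemented.

#[cfg(test)]
mod tests {
    #[test]
    fn handles_leap_seconds() {
        // Every test run reminds us (and anyone else running the suite)
        // that this is unfinished.
        todo!("decide how the scheduler should treat leap seconds");
    }
}
```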
TDD works if you just test the highest level API.
Often that means writing a test for a service or controller only. Somehow everyone got the idea that good TDD is about testing every function. It's the opposite. You just translate the high level use case (e.g. delete a record) into a test, and that's the only test you need to write. You don't need to write tests all the way down the stack.
As a side note, IMO a unit is a unit of functionality.
It sounds like you are describing the highest level API as in what the consumer uses, and that is not at all where TDD works best. TDD serves as documentation of the intent of your code. If your code has a class that has logic in it, it should be tested as part of TDD. You only need to test and document the public functions; how the code gets there (the implementation and private functions) is its own business. But your TDD tests should cover the public API of the class. When method A is given parameters B and C, result D should occur, or error F should be thrown. That way, when something breaks because of a change in the code, either in this function or in a function it relies on, your tests will fail exactly where the error occurs and you can fix it.
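A small sketch of that "method A given B and C yields D, or error F" style against a public API only; the Account type and its rules are invented for illustration.

```rust
// Tests name the public behaviour, including the failure case; the
// internals stay untested directly and free to change.

#[derive(Debug, PartialEq)]
enum TransferError {
    InsufficientFunds,
}

struct Account {
    balance: i64,
}

impl Account {
    fn new(balance: i64) -> Self {
        Account { balance }
    }

    // Public behaviour under test; how it updates state internally is
    // its own business.
    fn withdraw(&mut self, amount: i64) -> Result<i64, TransferError> {
        if amount > self.balance {
            return Err(TransferError::InsufficientFunds);
        }
        self.balance -= amount;
        Ok(self.balance)
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn withdraw_within_balance_returns_new_balance() {
        let mut acc = Account::new(100);
        assert_eq!(acc.withdraw(30), Ok(70));
    }

    #[test]
    fn withdraw_over_balance_fails_predictably() {
        let mut acc = Account::new(100);
        assert_eq!(acc.withdraw(150), Err(TransferError::InsufficientFunds));
    }
}
```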
That's integration tests, a unit is a single piece of functionality. With a web app and CRUD, there's tons of layers between the request and the database getting a record entered into it or removed.
A unit of behavior. A unit test is still small generally though, so you probably don't want to go up to the API level. That's integration testing. Hardware input/output shouldn't really happen in many of these test cases.
The thing I hate about TDD is just that it is a *lot* more work. Not only do I need to code the thing, but I also need to code a thing that defines what the thing should do given a certain set of circumstances.
Plus, some tests are just objectively harder to code than others. Testing whether a function added two numbers together is WAY easier than testing that resizing the browser window moved the elements to the exact spots they are supposed to be in. Sometimes you have to do a lot of extra work just to test that the damn thing did what it is supposed to; sometimes it can take many times more lines of code just to check all the edge cases. Plus, you have to test that all the fail states fail in the way you want the code to fail. It's a lot if you want your tests to be any good later, when you're making changes to your code and the tests tell you exactly where and what you messed up. Otherwise, what are you doing all this extra code writing for?
So you hate the long-term perspective because it's more work? Not to mention that you seem to hate not just TDD, but tests in general. And with that mindset, either you write those tests now, or someone will cry in the future. Hopefully you, because if you're the kind of person who, out of laziness, creates more workload for others in the future, then I don't like you.
@Mish844 I'm not strictly saying that tests are all bad; my comment mostly laments that driving your development with tests makes building the product significantly more labor-intensive.
Plus, since garbage in means garbage out, if your tests are wrong at any point, they can lead you to build the wrong product, or leave you with more surface area in which to write mistakes.
With TDD, the whole point is that your tests are supposed to drive the development, so it is not much of a stretch to say that the tests describe the thing you are building.
But if you are adding a feature and an older test fails, without any other context, you cannot know if the assumptions made when that test was written are still valid in the face of the changes that you made. So if you change the test to meet your new assumptions, you are now violating the description of the product that test was supposed to represent which theoretically undermines the point of having written the test in the first place.
If you rigidly adhere to the test, you may not be able to add the feature, or you may need to integrate it in some convoluted way which can lead to greater code complexity and/or poor performance.
These are the negative side effects that you get from TDD. Done well though, TDD gives you peace of mind that you didn't catastrophically violate the expectations of some other code that is well distanced from yours.
Not all projects and teams are suitable for TDD. There is a very real cost to using it and it is very easy to use it wrong.
I'm not saying "don't", but it isn't a magic solution. It takes work and discipline from everyone involved.
A point of praise for TDD is making things more testable because it'll naturally make you write more functions/more classes so that you can test it. But imo you've sort of just forced yourself to break up something which was naturally 1 unit into a bunch of non-reusable puzzle pieces
Yes, very true. While poorly designed code is often hard to test, code designed to be testable isn't necessarily well designed. Often, designing for a test makes the code harder to understand and more complex.
@@username7763 and then there are a shit ton of dependencies everywhere and the setup for tests is a nightmare. Here comes mocking...
I would argue that imposing the requirement that things must be very testable actually makes it more likely that the code becomes poorly designed. A lot of good designs happen to be very testable, but counting those is a bit of a moot point: the programmer capable of finding those good designs would usually have used them regardless of that requirement, simply because they are good in the first place. So demanding high testability doesn't really increase the use of those good designs (aside from testability being one of the reasons they are good). What we instead get are the cases of varying design quality, which now have an additional requirement imposed on how the design should look. That extra requirement makes the same problem harder to solve, and as a rule of thumb design quality is inversely proportional to how far in over their head the designer is; added complexity can certainly push them there.
It should be noted that sometimes the most appropriate thing is not classical automated testing, but something else. This is especially important when the main source of failure is conflicts with foreign interfaces, which classical automated testing would often need to mock out. An example would be testing interaction with persistent systems that you do not want to damage; some of these can be handled with special testing environments, but those environments are often way too heavy for the kind of automated testing TDD needs. That brings us to another point: while it is technically possible to test a lot of weird things, like those mentioned in a video game, such things often require a lot of skill and effort to set tests up for, often far in excess of their value, and TDD is definitely not enough of a miracle that it is worth doing at any cost.
With TDD, you are supposed to test at a pretty high level. If you find that TDD requires you to change how you structure your implementation, you are probably testing too low. (Yes, many tutorials get this wrong, too!)
This entire video is just him saying “it works, trust me. You’re wrong!!” Over and over lol
I am surprised this guy didn't say we should write tests before speaking to the client
All I was hearing throughout all this thing was "Use this, if you don't you're a dumbass"
I think Dave's definition of a unit test scales a lot more than Prime's... Tests for CRUD are just as valid as tests on functions. I think the only rule is that the tests should be small, so you would be more likely to use stubs rather than real services where services reach out elsewhere.
Why the hell do you unit test CRUD?
@@sanjayidpuganti I have a published interface spec and don't want to have to spin up an entire test environment to test my implementation. Each endpoint is small and simple to test - so why not... Integration tests would probably need artificial inputs to touch all code paths right now.
I was interviewing for a backend position once, when the lead tester for the project (30 people) asked me if I do TDD. "Sure", I said. Then I laughed, and then the tech lead laughed, and then the team lead, and then the test lead himself laughed, and then the project manager laughed, and then the HR representative laughed.
I got the job.
I think you nailed it at the start Prime.
If your units are tiny, the value is negligible.
TDD seems to work best when you approach it like mocked integration tests.
Test everything through the stack, from fake input to fake output. Then you can refactor to hell in the middle without affecting your public API (there's a rough sketch of this below).
If you apply it at the level of a single function you couple tests to too small a unit, which means refactoring screws up the tests. You actually write most of your shell scripts this way: give it input, run it, check the output.
But there are also certainly places where TDD does not make sense.
Lots of small abstraction packages just don't make sense to develop using TDD.
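A rough sketch of that "fake input in, fake output out" shape in Rust; the command handler and its behaviour are invented for illustration, not taken from the video.

```rust
// The test only touches the outer handle() function, so the parsing and
// validation in the middle can be reshuffled freely during refactoring.

fn handle(request: &str) -> String {
    // Internal structure is free to change: split, merged, renamed.
    let parts: Vec<&str> = request.split_whitespace().collect();
    match parts.as_slice() {
        [cmd, a, b] if *cmd == "add" => match (a.parse::<i64>(), b.parse::<i64>()) {
            (Ok(x), Ok(y)) => format!("ok {}", x + y),
            _ => "error: not a number".to_string(),
        },
        _ => "error: unknown command".to_string(),
    }
}

#[cfg(test)]
mod tests {
    use super::handle;

    #[test]
    fn adds_two_numbers() {
        assert_eq!(handle("add 2 3"), "ok 5");
    }

    #[test]
    fn rejects_garbage() {
        assert_eq!(handle("add two three"), "error: not a number");
    }
}
```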
Thank you! I thought unit tests were just tests that don't cross a port.
Unit tests that are too tight are bad TDD to me. It's why BDD came about.
To be fair though, I am not a fan of TDD.
@@robertluong3024 I feel like I'm not really understanding something. How can you have a test that's too tight? Or are you testing private methods?
@@Tkdestroyer1 At a single-function level it's very difficult to differentiate between implementation and behaviour.
Too tight basically means the code under test is so simple that it only has a single behaviour, and if you want to move that behaviour, you have to update the test.
Think unit testing a class, then deciding that class needs to be split into two, now all the tests and setup and teardown needs to be updated.
If you instead target your tests at the public api of your system, you can split and recombine the individual classes underneath as much as you want and nothing needs to change.
"Integration test" is a terrible name, because to some people it means spin up the DB and the internet and test for real, and to others it means test the integration between the internal modules while still staying isolated from IO dependencies.
That second one is the golden path for TDD.
FFOD: Figure it the Fuck Out Development. 🤪🤣
I reason about it like this:
1) You can't write code until you have a complete, rigorous and unambiguous Detailed Design Specification (DDS). That is, detail down to the "unit", define it as you will: function, module, class. Down to the API, function parameters, etc. Whatever.
2) You can't write tests until you have a complete, rigorous and unambiguous DDS. As above.
3) It follows then that it does not matter if you write the tests first or the code. Because they both derive from the same document. At the end of the day they have to match up but they do not depend on each other.
4) Tests should be written by different people than those who write the code, thus greatly reducing the chance that both bake in the same misunderstandings of the requirements/design.
5) For extra reliability tests and code should be reviewed against the DDS by yet other different people.
Someone please tell me if and where the above logic is faulty. That is how things were done back in the day when I spent far too many years on test teams for Boeing 777 flight controls and such things.
Now, I can well imagine that in the modern world many do not have that rigorous DDS document, what with being all "agile" instead of "waterfall". Seems to me then that the tests in TDD are being hoisted to the front of development as a replacement for that DDS document. Well, if you want to churn out buggy code, that is the way to go I guess :)
It also seems that a lot of development forgoes the DDS and goes straight into code. Exploration is done. You don't know what you want until you have done it. Things may change along the way: functions, parameters, APIs, protocols, whatever. It's right when it is right. Demanding tests be done upfront and then having to constantly change them to keep up with the exploration is just a daft waste of effort.
Having said all that, I would have more sympathy for TDD if it ever said that the people writing the tests should be other than those writing the code, satisfying 5) above in the absence of a design spec.
God I hate all these self proclaimed software gurus who constantly advise us how to work, and then say "You were doing it wrong" when everything goes to shit.
If I ever have the privilege of working on a team, I will be taking all of this to heart. That being said, for my larger projects I do write up a pseudo-DDS which does greatly speed up my development.
@@ja31ya I wish you well on your journey. Working with teams can be a joyful and educational experience when the team comprises like-minded people who know what they are doing and have a passion for the project. It can also be miserable, when nobody is enthusiastic, everyone is only in it for the money, communication is terrible, and you can't trust that everyone is on the same page.
Software gurus who no longer write code tell you how to write code.
True, you need a viable design before starting work. Since I work alone my design is informal, but it takes a lot of time before any meaningful work begins. It seems I am a dumb person because I spend a lot of time thinking about the design; even in the middle of the project, when I see major flaws in the design, I stop dev effort and concentrate on design again.
I don't know how Agile addresses the need for a viable design and, importantly, stopping when the design is flawed, adjusting the design, and throwing away some sprints in the process.
It is fun to see how many programmers have taken "unit" to mean "function" or "class" when originally it rather means a "functional unit of the application". In terms of viewing your application as a black box, that "functional unit" actually equates to a "functional requirement" and is therefore equivalent to an end-to-end integration test. The interface is the functional requirement derived from analysis, so the underlying implementation can change as much as necessary and the test will remain stable unless the actual functional requirement of the application changes. That change may come from later clarifying in more detail what the original requirement was supposed to be, or the requirement may be removed/replaced entirely during the product's life when it no longer serves a purpose.
I urge programmers to try and use "unit" in terms of "functional requirement" when talking about TDD as this helps understand the concept better.
If you really want to drive the tests down to actual single functions/classes then you can do that as well, but you have to realize that the "black box" you are testing "functional requirements" for is that class/function, and when that class/function is no longer required by its caller those tests are no longer useful in the wider scope of the application. Similarly, the end-to-end tests would not be meaningful once the application is deprecated/discarded.
In terms of developing you start from the outside in in terms of TDD. First tests for the application as a black box. Then during refactoring of your code you will have a better understanding of how you want to organize the internals of the application (which at that point already works since the outermost tests are green and thus refactoring is easier since you have guide rails to indicate when something broke) and thus you can start defining the "functional requirements" of some internal components and write tests for those and refactor the code to move those implementations into those modules/components.
With every level of depth you increase the chance that that implementation will become deprecated and those tests will therefore get discarded. Do with that assumption what you will.
what is testing...baby dont code me, dont code me... no morreee
My main problem with TDD is, it doesn't respect the cognitive process of complex problem-solving. My development flow is, I make it work, then I improve the quality.
And I never get the idea that OOP and TDD can live in harmony. In the OOP world, it is ridiculous how far people have gone to make it work (take the implementation and API of Mockito as an example). So you'd better familiarize yourself with the difference between Mock, Stub, Fake, and Spy before the next interview? And if you are familiar, you spend your cognitive energy in a paradox-of-choice situation wondering what to use.
In FP, you can unit tests just with the core language (libs could be nicer though).
I don't know how anyone justifies the high amount of investment (time, effort) to the little benefits TDD brings.
I'm not against applying TDD in the right situation. In some situations, that's the best way forward. But the TDD camp doesn't say that. They say if you are not doing it all the time, you are a "cheater".
Great job @ThePrimeTime on reacting to the video. Some of your points are gold!
Eventually, TDD has almost no overhead. This has been the case for me in my code base. This took me about one to two weeks or so and mostly involved me exploring what the hell I was supposed to be doing in the code in the first place. Once I actually figured out what I wanted to do, the TDD cycle now takes me 15m-2hrs, depending on the feature, and it remains that way as I sparingly re-visit the code from time to time to make new changes. Most of the time I wasted in that 1-2 week period was not because I was re-writing tests, but because my understanding of how I was working with some frontend/backend technologies had changed entirely.
Regarding your question about the different testing terminology: as far as I know, you should generally do something other than mock things out; that's a last resort, in my view. Above all, create a working interface, so that you can run your unit tests with fake objects that are simulated in-memory, and that same test would still work when you use the actual implementation that hits a database/internet instead of something in-memory. At that point you're testing an interface, and that gives you a lot of test coverage and quicker dev time. That's a big reason why I enjoy Python's duck typing and TDD.
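The same idea sketched with a Rust trait instead of Python duck typing (all names invented): the test depends only on the interface, so it runs against an in-memory fake now and could run unchanged against a real database-backed implementation later.

```rust
// Fake-over-mock: a hand-written in-memory implementation of the
// interface, used by the test instead of a mocking library.

use std::collections::HashMap;

trait UserStore {
    fn save(&mut self, id: u32, name: &str);
    fn find(&self, id: u32) -> Option<String>;
}

struct InMemoryUserStore {
    users: HashMap<u32, String>,
}

impl InMemoryUserStore {
    fn new() -> Self {
        InMemoryUserStore { users: HashMap::new() }
    }
}

impl UserStore for InMemoryUserStore {
    fn save(&mut self, id: u32, name: &str) {
        self.users.insert(id, name.to_string());
    }
    fn find(&self, id: u32) -> Option<String> {
        self.users.get(&id).cloned()
    }
}

// Generic over the interface: a database-backed store could be swapped in
// without touching this function or its test.
fn rename_user<S: UserStore>(store: &mut S, id: u32, new_name: &str) -> bool {
    if store.find(id).is_some() {
        store.save(id, new_name);
        true
    } else {
        false
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn renames_existing_user() {
        let mut store = InMemoryUserStore::new();
        store.save(1, "Ada");
        assert!(rename_user(&mut store, 1, "Grace"));
        assert_eq!(store.find(1), Some("Grace".to_string()));
    }
}
```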
The conclusion is very true IMHO, there's no point testing implementation details, you're just making your future life harder ; tests are meant to ease refactoring (or at least give you a certain level of confidence when doing it), not the other way around.
I feel like TDD might be useful for those heavily spec’d technologies. Or in a sense you can call TDD spec driven development. I think some mature blockchains such as ethereum can benefit from TDD as the specs are always heavily thought through, and won’t change easily.
Dave on his channel uses a term often: "behavior driven development".
I don't think most software changes its specifications easily (once the core of the system works). Yes, you add features, but the core of the system rarely changes radically from the user's viewpoint. And if you write your tests the TDD way, the tests will be much easier to redo once the specifications change, because the code was made to favor test creation in the first place.
I write my tests after, and it's not hard.
TDD guys saying I need to restructure my code if I don't practice TDD are mostly wrong, or with TDD I would have had to restructure it anyway. No time saved.
If you can't write testable code without TDD then that sounds like a skill issue to me.
There are definitely situations where it's hard to write tests after, but I think that largely misses the point of having to worry about that nuance. It's more of a persuasion technique to try and help convince you there's a simpler way of writing tests.
The point of doing TDD is that your design is led by tests, not that it was easier to write. The simplicity of writing beforehand is just a side effect but a major reason why people continue to do TDD.
16:31 that's the biggest argument against TDD: you can have a project with 100% coverage and still encounter bugs.
No, that's just wrong. No one claims their favorite development method achieves bug-free code. We all know that's impossible, including the people advocating for whatever development/test method. The goals are usually to reduce the number of bugs, or to make code easier to maintain in the long run & similar claims.
So... is TDD just something you do to learn to write modules, starting from the very small pieces of code, and then when you've climbed that ladder you have a better understanding of how to write code that is testable and more modular, and you throw that ladder away and instead of being dogmatic about it you apply the lessons learned on a scale where it makes sense when it makes sense? So is TDD more like a kata for those who still don't know how to write code that is modular and easier to test?
I dunno about you, but I find everything so much easier with TDD. All I have to do is add a test, see it pass, and request a review.
I am genuinely under the impression that the book definition of TDD doesn't actually work (or not for the reasons described) and that they are just confusing correlation and causation. TDD forces you to write tests (which is good, and I need it >_
What infuriates me is how pointless prominent name dropping is - there's plenty of successful companies that don't practice TDD, so by the same logic, should we all just copy them?
Heck, Kent Beck worked for Facebook and he was in for a shock at their approach to testing (they barely practice it, they prefer testing in prod with the infra to rollback smoothly) and he came around to how there's more than one way of achieving software quality.
If you are making yourself do TDD, something is wrong.
TDD is just one of the ways of writing code that you naturally use in some situations.
More specifically, I write test code first when the code I want to write is clear and obvious while the test code is not.
In such cases, I write the test code first to consolidate the strategy of how to test the code.
I used TDD to develop stuff that's easy to break, like writing functions to read bytes and determine a variable-length quantity for the MIDI spec. Just an easy way to verify it's working. It makes sense that NASA needs those types of tests for their calculations. TDD is a tool that can help when engineering things that are prone to breaking but likely won't need major refactoring. Like Prime said, black-box function testing makes tons of sense.
I love TDD and I'm proud. It's a fantastic method for developing software and think almost every rebuttal people give for why TDD is bad is almost always a misunderstanding of how to write good tests.
But I'm also a communist who says the USSR just didn't do it right so.
Prime's point about tests not catching every bug: that's fine. That's not a fault of TDD. When a bug occurs that you didn't catch at first, before you fix the bug write a test that describes your ideal output given the state and input necessary to create the bug, and verify that your assumption that the bug occurs when the state and input are X actually fails. And then fix it so it doesn't fail. This isn't a crime against TDD; it's how TDD handles bugs. Now you know that one bug will never come back, because if it does your tests will catch it, since they look for that exact situation. If it fails another way, repeat the process.
Where did this insane idea come from that tests are written in stone and can never be updated or deleted or changed? That is not true. TDD is about documenting the intent of your code. If you no longer need the code, or if your test is no longer correct, you can throw it away and change it. Just like we do with code. Imagine someone complaining that the requirements changed and now they need to change their code. It happens. Change the code.
@@anarchoyeasty3908 About making a new failing unit test that describes the bug: bingo. I think it also helps if you consider that you don't want to overcomplicate any single test either. Having two asserts in your test case versus one, just to catch that bug you didn't catch earlier, isn't great design for a unit test.
As far as I understand, TDD is almost what you've said in the past that your approach is (writing flat, one-dimensional code at first and then refactoring it later), but with an added step at the beginning where design specifications become written into code
Tests don't add value to the product
I think I have a similar approach. My workflow usually is to 1) start with a dirty prototype, 2) write integration tests for the whole feature/module (sometimes I start with that), 3) refactor the shit out of it, 4) if I am happy with the result, write unit tests wherever it makes sense.
So I guess it's kinda TDD, but only partially and not always.
What you are doing sounds good, and I feel it is better than TDD, but it does not sound like TDD, where you need to write the test before writing functional code.