Practice your TDD with a FREE hands-on tutorial where you can work along with me using an excellent practice tool. Sign up for your test driven development tutorial HERE ➡ courses.cd.training/courses/tdd-tutorial
When working with older code, the policy we use now is based on the boy scout rule: leave the code cleaner than you found it. So if you go work on some old code, add some unit tests to the part you are working on. If you see some really bad code, take the opportunity to clean it up a little, if reasonable, and add tests for it. Take that function that is 500 lines long and break it up into many functions with a much clearer interface for each one, and then test them. I have found this actually ends up working quite rapidly to fix an older code base.
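To make the "carve it up and test the pieces" idea concrete, here is a minimal TypeScript/Jest sketch; LegacyTicket and parsePriority are hypothetical names standing in for a small fragment pulled out of a long legacy routine.

```ts
// Hypothetical sketch: a small function carved out of the middle of a long
// legacy handler, so it finally has a clear interface and can be tested.
interface LegacyTicket {
  subject: string;
}

export function parsePriority(ticket: LegacyTicket): "high" | "normal" {
  return ticket.subject.toUpperCase().includes("URGENT") ? "high" : "normal";
}

// First unit test for the newly carved-out piece.
test("urgent subjects are flagged high priority", () => {
  expect(parsePriority({ subject: "URGENT: server down" })).toBe("high");
  expect(parsePriority({ subject: "weekly report" })).toBe("normal");
});
```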
In my experience, the second hardest thing about moving to TDD in a legacy system was writing the test fixtures. Those systems tend to have a lot of dependencies, so you'll be writing a ton of mocks to test a single class.
The hardest thing was getting developers on board. Even senior devs have trouble understanding how to approach the tests, and they are resistant to the initial productivity hit.
Yeah, that's the price of attempting coverage via testing-after-development (TAD?) rather than TDD. Systems that don't have many unit tests from early on often end up tightly coupled. It's really complicated to try and wedge tests in between tightly coupled components. And then what really sucks about all of that is that your tests become super tightly coupled to the implementation in turn, so they don't even help that much with the refactoring work that legacy code often really needs. So I totally understand why devs that understand the code would be reluctant to create a lot of complex, relatively brittle (!!) unit tests. It's a balancing act. Sometimes it makes more sense to write higher-order integration tests, characterization tests, to give you a safety net to carve out components you can uncouple and refactor entirely (it's easier to leverage TDD at that point), then replace the legacy components one at a time. Not like that's easy, either, but you tend to end up in a better place overall because none of your tests are quite so throwaway at that point.
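For anyone unfamiliar with the term, a characterization test pins down whatever the code currently does, right or wrong, so you can refactor behind it. A minimal sketch, with legacyPriceQuote as a hypothetical stand-in for a real legacy routine:

```ts
// Tangled legacy logic, preserved as-is; we are not judging it yet.
function legacyPriceQuote(qty: number): number {
  let price = qty * 12.5;
  if (qty > 5) price = price * 1.1; // surcharge nobody remembers adding
  return price;
}

test("characterize: quote for qty 10 stays at its current value", () => {
  // Expected value captured by running the code once, not derived from a spec.
  // If this fails after a refactor, behavior changed and we need to look.
  expect(legacyPriceQuote(10)).toBeCloseTo(137.5);
});
```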
You probably don't want to start with writing mocks for dependencies. I tend to just write big-scope tests, and just use the dependencies, and only try to fake them when they're a problem (like database, filesystem, network access, etc.).
@@danielwilkowski5899 Exactly! Mocking all dependencies to isolate the class under test (London-style unit tests) is a terrible idea that is killing the European software industry. Americans are writing mostly classical unit tests without mocking object interactions.
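A small sketch of the "fake only the problematic edges" approach from the comment above: the real service logic runs, and only the database is replaced by an in-memory fake. All names here (UserRepo, RegistrationService) are invented for illustration.

```ts
interface UserRepo {
  save(name: string): void;
  count(): number;
}

// Only the DB edge is faked; everything else in the test is the real thing.
class InMemoryUserRepo implements UserRepo {
  private users: string[] = [];
  save(name: string) { this.users.push(name); }
  count() { return this.users.length; }
}

class RegistrationService {
  constructor(private repo: UserRepo) {}
  register(name: string) {
    if (!name.trim()) throw new Error("name required");
    this.repo.save(name);
  }
}

test("registration stores a valid user", () => {
  const repo = new InMemoryUserRepo();
  const service = new RegistrationService(repo);
  service.register("Ada");
  expect(repo.count()).toBe(1);
});
```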
If you use functional programming you don't need mocks. If you use good types and assertions, you barely need unit tests at all.
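For illustration, a pure function over well-typed values needs no mocks, since all inputs and outputs are explicit; orderTotal below is a hypothetical example.

```ts
type LineItem = { price: number; qty: number };

// Pure function: no dependencies to fake, nothing to set up or tear down.
const orderTotal = (items: LineItem[]): number =>
  items.reduce((sum, i) => sum + i.price * i.qty, 0);

test("total is the sum of price times quantity", () => {
  expect(orderTotal([{ price: 5, qty: 2 }, { price: 1, qty: 3 }])).toBe(13);
});
```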
My experience (of developing a middleware/integration platform) is...
1. You need integration tests if this is where the value of your system lies.
2. Edges to test in increasing difficulty are: file, database, simple REST webservice; process spawning, sending email; complex webservice; FTP/SSH; incoming email; ERPs/business systems (see the file-edge sketch after this list).
3. Depending on your development framework/language, there may or may not be a test framework/add-in which can help with testing the particular edge (mail, DB, FTP, etc.). Where there is, there will be varying degrees of maintenance: database test frameworks are generally well maintained, mail testing less so. Where there isn't (for example FTP, SSH), you'll need to build your own.
4. Some systems are almost impossible to test 'properly'. ERPs and business systems with complex transactions can really only be tested by choosing a couple of key data points; otherwise you're rebuilding the system's logic in your own test framework. The other fields/properties need to be manually compared and reviewed.
5. Your tests will evolve, become better and less brittle over time. Don't fret if they're a bit crap to begin with (at least you're doing something).
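As referenced in point 2, here is a sketch of the easiest edge: a real-file integration test. exportReport is a hypothetical function under test, and the test touches the actual filesystem via a temp directory rather than mocking it.

```ts
import * as fs from "fs";
import * as os from "os";
import * as path from "path";

// Hypothetical function under test: writes a report file into a directory.
function exportReport(dir: string, lines: string[]): string {
  const file = path.join(dir, "report.csv");
  fs.writeFileSync(file, lines.join("\n"), "utf8");
  return file;
}

test("report is written where we expect, with the content we expect", () => {
  const dir = fs.mkdtempSync(path.join(os.tmpdir(), "report-test-"));
  try {
    const file = exportReport(dir, ["id,total", "1,99"]);
    expect(fs.readFileSync(file, "utf8")).toBe("id,total\n1,99");
  } finally {
    fs.rmSync(dir, { recursive: true, force: true }); // clean up the temp dir
  }
});
```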
You need integration tests if your code integrates with another application.
Unit tests are almost worthless when writing a distributed system.
Seems like what separates "glue code" from something more computational/functional (in style, not purpose) is that glue code tends to have side effects (an email, a DB transaction, etc.). You can still unit test these by using a mock framework as part of your unit testing. Maybe if you're actually testing for an email being sent, that's more of an e2e thing (I tend to forget the difference between e2e and integration testing).
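A sketch of unit-testing that kind of side effect through a mock: the email gateway below is a hypothetical interface, replaced by a Jest mock so the test can assert that a send happened without actually sending anything.

```ts
interface EmailGateway {
  send(to: string, subject: string): void;
}

// Glue code with a side effect: it sends an email via the gateway.
function notifyOverdue(gateway: EmailGateway, user: string) {
  gateway.send(user, "Your invoice is overdue");
}

test("an overdue notice is sent to the user", () => {
  const gateway: EmailGateway = { send: jest.fn() }; // mock replaces the edge
  notifyOverdue(gateway, "ada@example.com");
  expect(gateway.send).toHaveBeenCalledWith("ada@example.com", "Your invoice is overdue");
});
```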
Unit, integration, e2e... I've come to see it as a gradient more than a distinct classification. E.g. what do you call a test that runs a query with some logic on a local, purpose-built database? It does IO, so it doesn't sound very "unit", but how can you reduce its scope? Check the generated SQL against a constant? That wouldn't give me much confidence, especially if it's using implementation-dependent features. It may even depend on how good I am at SQL. If it's fast, for me it's a good unit test.
We've stopped mocking our DB. Instead we focus on making setup and teardown fast.
We still try to write code that doesn't need a DB - and where we have that, yeah, we test without a DB. But if the DB is an intrinsic part of the operation, then in my experience mocking the DB can hide significant bugs.
@@bobbycrosby9765 I think it also depends on how you're leveraging that DB - If it's SQL and you're doing simple selects/inserts/updates you can get away with mocking, but if you're utilising more (exotic) functionality then you need to ensure it works and the only way you can do this is by hitting the service itself.
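One way to make "real DB, fast setup and teardown" concrete: give every test a fresh throwaway database and let it create its own schema. Sketched here with in-memory SQLite via better-sqlite3 as an assumption; the same pattern can point at a per-test schema on a real server.

```ts
import Database from "better-sqlite3";

// Hypothetical helper: every test builds its own throwaway DB and schema,
// so setup is milliseconds and no mocking of the DB is needed.
function freshDb() {
  const db = new Database(":memory:");
  db.exec("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT NOT NULL)");
  return db;
}

test("inserted users can be read back", () => {
  const db = freshDb();
  try {
    db.prepare("INSERT INTO users (name) VALUES (?)").run("Ada");
    const row = db.prepare("SELECT COUNT(*) AS n FROM users").get() as { n: number };
    expect(row.n).toBe(1);
  } finally {
    db.close(); // teardown: just drop the whole in-memory DB
  }
});
```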
My first rule is to abstract all the IO using a clean architecture design. Then for the DB, since you have control over it, you can test your DAOs/repositories against a real DB, in a BDD manner: "given I do this write operation, that read operation should return this". For external APIs, I like to use a tool to record an actual interaction with the real API, which can then be replayed for the tests. The point is not to test that the implementation of the external API is correct, but that our interaction with it is correct. And then you can mock those IO adapters for your computational tests.
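A hand-rolled sketch of that record-and-replay idea (real projects often use a VCR-style library instead); RatesApi and both adapter classes are hypothetical.

```ts
interface RatesApi {
  getRate(currency: string): Promise<number>;
}

// Run once against the real API; captures responses into a fixture object,
// which can then be persisted as JSON for the test suite.
class RecordingRatesApi implements RatesApi {
  constructor(private real: RatesApi, public fixture: Record<string, number> = {}) {}
  async getRate(currency: string) {
    const rate = await this.real.getRate(currency);
    this.fixture[currency] = rate;
    return rate;
  }
}

// Used in tests; answers only from the recorded fixture, never the network.
class ReplayRatesApi implements RatesApi {
  constructor(private fixture: Record<string, number>) {}
  async getRate(currency: string) {
    if (!(currency in this.fixture)) throw new Error(`no recording for ${currency}`);
    return this.fixture[currency];
  }
}

test("conversion uses the recorded exchange rate", async () => {
  const api = new ReplayRatesApi({ EUR: 1.1 }); // fixture recorded earlier
  expect(await api.getRate("EUR")).toBeCloseTo(1.1);
});
```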
Mocking frameworks are terrible! Stay away from them. Use integration tests for testing IO, DB...
Dave, since you've written a couple of books now, how would you contrast your experience as a writer with that of a developer, specifically in regard to TDD? At least for me, as a bit of a writer and coder myself, I see significant parallels between writing and coding.
When we write, sometimes we start with one thought, but end up on some other tangent, that has to get put elsewhere. A lot of times we just have a vague idea and no clue where that will take us. We still have to go back and clean it up, so it's not a meandering mess (refactor). But the key thing is, we don't always know what the end result is going to be and we juggle ideas as we go.
TDD seems to ask us to know what the end result of our writing is going to be in advance, and that we need to stick to one and only one idea at a time and link them together in a very logical fashion. But this would seem to very much constrain the whole creative process of both coding and writing alike.
There's of course the time to put on our editor's hat, but that doesn't tend to happen until we've written a fair bit. And some of the writing advice I've gleaned is that one shouldn't wear both the writer and editor hats at the same time.
Maybe this would be an interesting subject for one of the weekly short videos? How does TDD fit into the creative process?
As a non-writer, I can only comment that I think the way to think about TDD here is as a feedback loop that is as short as possible and that indicates your written work conveys all of the core ideas that you want it to convey.
For some writing, this may end up being a simple checklist that ensures that you cover everything. For other forms it may well be a meandering mess.
The biggest issue is that I don't know how you get the main benefit, which is how short the feedback loop is. With writing, you will likely always have to reread sections with your list of requirements at hand to check that they are all met. And if you decide to move a concept to a new chapter, you end up having to read the entire book and keep it in your head to actually check the boxes.
Maybe the more important step is breaking down your writing to the core concepts you feel have to be there before you start writing. How they get conveyed seems like an implementation detail ;-).
@@SiasMey "the more important step is breaking down your writing to the core concepts you feel have to be there before you start writing. How they get conveyed seems like an implementation detail"
In the writing world, that usually takes one of two forms: 1) the outline; 2) a "treatment". An outline is like a list of requirements, the key points/topics. A treatment is like a very short story/summary of the story you're writing, which encapsulates the work to be undertaken. And then yes, once you roll up your sleeves and get to it, that's when the details get fleshed out. But I've never written an entire book, so I can't speak to how much a writer keeps in his head. I suspect quite a lot =)
I would love to see a discussion on applying TDD to game development.
It seems hard to do because so many things are glued to I/O of player interaction
I have it on my backlog for a video, but it is a fair bit of effort, and I have limited time per episode - I also have to make a living 😉
@@ContinuousDelivery I'm in no hurry, this is a deep topic.
You might find a guest who talks about that and have a podcast episode, so it wouldn't take a lot of your time
Great video Dave. I am the weirdo in my job who actually enjoys writing unit tests and believes in TDD. I find a lot of developers I meet, especially my fellow UI developers, don't see the point of unit tests, or at best treat them as a chore to be done just to get their PRs approved. Why, in your opinion, do so many developers dislike or not believe in writing unit tests?
Allow me to put in my experience.
It's because they write trash/brittle tests tied to implementation details.
@@captainnoyaux Completely agree, but even before that they don't even want to write any tests. Why?
@@francis_n Because why do more "work" when you can click around and get the same result? (From their point of view, they don't lose time by debugging their programs 100 times instead of unit testing them.)
@@captainnoyaux sounds wasteful to me lol.
I was trying to teach my nephew some programming the other day and used TDD. He saw the value of it and literally said 'writing the test afterwards is just a chore'. What I tried to explain to the new guys is that now you are thinking of the problem and you want to capture that in code immediately. If you do it afterwards then you think about the problem, solve the problem and then think about the problem again...which is just a chore.
I'm of the opinion that there are 2 types of unit tests and that both are important. There are the BDD ones that give confidence that the code actually functions from the point of view of the interface and can drive development through TDD. BDD and TDD are great.
The type I think is omitted and wrongly demonized is tightly coupled unit tests of small units. I think these can be written quickly and maintained easily, and that writing them encourages code to be broken up into more small, readable, reusable, and maintainable units. And honestly, I think tools to automatically generate and regenerate these tests should be written and executed in watch mode during development.
I essentially think we should be generating snapshots that are themselves tests that generate snapshots that display changes in execution results. It should form the extreme base of the testing pyramid and be entirely automated. Just press a key to accept the latest snapshots and they become part of source and can be reviewed in source control.
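For reference, this is roughly what the snapshot idea looks like with plain Jest today: the first run writes the snapshot file, later runs fail when the output changes, and pressing u in watch mode accepts the new snapshot into source. buildInvoice is a hypothetical function under test.

```ts
// Hypothetical function whose output we want to pin down.
function buildInvoice(items: { name: string; price: number }[]) {
  return {
    lines: items.map(i => `${i.name}: ${i.price.toFixed(2)}`),
    total: items.reduce((sum, i) => sum + i.price, 0),
  };
}

test("invoice output is stable", () => {
  // First run records the snapshot; later runs diff against it.
  expect(buildInvoice([{ name: "widget", price: 9.99 }])).toMatchSnapshot();
});
```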
The best thing I've done to improve my testing was to stop wasting time testing small units. I start a story with a test or tests that describes the story I'm about to work on, and that's the only test(s) I'll write. That usually means I'm testing a high level API, like against a controller or service. I don't find that small tests are maintained easily; they get in the way of refactoring.
@@michaelslattery3050 I find that, with Jest snapshots at least, the tests largely update themselves, giving me information about runtime issues caused by myself or other devs that I would otherwise miss. Snapshots are less about tests that always pass and more about being informed of runtime changes. I also find that my technique for testing small units is so consistent and repetitive that it becomes almost too easy, making me almost too lazy to want to do other things. And so easy that I know it's realistically achievable to fully automate it, making it self-maintaining. So the argument that it would be hard to maintain would be a moot point in my mind.
It's super important to emphasize that I also believe in testing the way you say you test.
I just think the other way will become fully automated, generated directly off the code you write without you needing to write it yourself.
I find tests introduce too much friction when prototyping. Creating and discarding different interfaces and base implementations constantly changes the tests when I just want to understand how the interfaces will fit together. Once I nail that down I want my tests. Thoughts?
I disagree, it is the tests that tell you that the interfaces are good. This is a common symptom of treating TDD as primarily a testing approach, rather than primarily a design approach. The idea of TDD is to use the test to "experience" your code as a user of it, if the test is difficult to write, your design is bad. If your tests change when you change your implementation, then your tests and/or your design is bad. This is good stuff to know when you are prototyping a design.
@@ContinuousDelivery "it is the tests that tell you that the interfaces are good" That's true IFF you ignore all the questions unit tests can't answer, with the most glaring one being performance at scale. I often write several prototype implementations of the same logic that can drastically differ in structure (think whole classes/components existing in some but not others), and end up throwing away all but the most performant version. For example: just this week I spent ~4 hours writing a prototype, and it would have taken just as long to write the tests, but the new approach ended up making performance worse, so the whole branch got scrapped. If I were to write tests before prototyping, I'd have wasted another 3 or 4 hours writing tests for code that doesn't exist anymore.
Admittedly, I'm biased because so much of my time lately has been spent trying to eke out marginal gains in performance that can't be measured without deploying. I'm trying to embrace TDD more, but it seems poorly suited to solving those problems.
Tho to be fair, I suppose technically I did write the load tests first...
@@Tw33ty271 You are correct in saying that TDD is not well suited to answering questions of performance. I believe what Dave was trying to highlight is that what TDD is good at is showing that your 4 implementations of the same thing are actually the same thing.
In this way, the performance comparisons you make between them are actually real comparisons.
A hard thing for me to wrap my head around was that, unlike so many other things in development, you kinda want your TDD tests to be as far to the outside of your code as possible. My preference is that TDD actually just uses the public API that wraps my core logic. That way, I can reimplement as much as I want within that core and never worry about the tests changing. Mocks, which are so simple and delicious, really do get in the way of this, though, and legacy systems get in the way of it even more.
I have spent a lot of time thinking about your approach and problem vs what seems to be the intent behind TDD. My conclusion: TDD doesn't care about the problem you are trying to saddle it with, and that is one of its strengths. It is a specific solution to a specific problem:
"How do I make the shortest possible feedback loop to give tell me I am moving towards or away from a completed implementation of my solution"
as a side benefit
"How do I tell my colleague that he is moving towards or away from a completed implementation of the solution when he maintains this next month"
That is really the primary goal of TDD in my perspective; everything else is a side benefit derived from how it achieves that.
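A tiny sketch of the "tests as far to the outside as possible" point from the comment above: the test drives only the public API, so the internals behind it can be reimplemented freely. ShoppingCart and its methods are hypothetical.

```ts
class ShoppingCart {
  // Internal representation is free to change without breaking the test below.
  private items = new Map<string, number>();

  add(sku: string, qty = 1) {
    this.items.set(sku, (this.items.get(sku) ?? 0) + qty);
  }

  totalQuantity(): number {
    let n = 0;
    for (const qty of this.items.values()) n += qty;
    return n;
  }
}

test("cart reports total quantity through its public API only", () => {
  const cart = new ShoppingCart();
  cart.add("SKU-1");
  cart.add("SKU-1", 2);
  expect(cart.totalQuantity()).toBe(3); // no assertions on internals
});
```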
@@SiasMey First, thanks for the thoughtful response. I think you're correct both about the intent/value proposition of TDD, and what Dave was trying to highlight. I made my original comment shortly after finding this channel. Since then, I've consumed a lot more of it, and I think what Dave describes as BDD is much more in line with what you're describing, and what I'd like to implement. I think you summed it up perfectly here:
>you kinda want your TDD tests to be as far to the outside of your code as possible. My preference is that TDD actually just uses the public api that wraps my core logic"
I have found that outside this space, when people talk about TDD they're specifically talking about unit tests (for example, Uncle Bob), and that was where the entirety of my frustration came from. At my work, we have stringent code coverage requirements and a massive unit test suite that's very tied to the specific implementation details at any given moment. In another video, Dave points out that this happens in no small part because people write the code before the tests, but they (or at least I) do that because they intuitively understand that writing such low-level tests is so often a complete waste of time.
I'm taking the initiative at work to start writing a brand new suite of tests that exists outside the project itself and does very thorough testing of our public API. So far, it's looking like doing that is going to take about as much time as it did to write the app itself, but I'm far less bothered by spending that time because I know at the end of it I'll have a test suite that's going to exist (and actually be useful) for more than a week.
> My conclusion, TDD doesnt care about the problem you are trying to saddle it with, and that is one of its strengths. It is a specific solution to a specific problem.
I agree, and that's kinda the point I was trying to make. It was pitched to me as the basic philosophy that underpins all good development, but I see it more as a very effective tool to solve a very common problem. I think most of my issues with it come from the fact that it's a hammer I was being told to use to drive screws, if that makes sense
In this era, youths are popular celebs and are driving innovation and tech, whilst old people are seen as boring and without passion.
Great to see a programmer of your age wearing the cool T-shirts that youth wear. Imo, the cool fancy T-shirts that young programmers wear make young people think programming is cool: "since I am young, I can be a programmer like them".
Hope old people will get inspired by you and your cool shirts. Not to mention that instead of the boring scholarly style that old people have, I as a young person liked your informal-ish style. Love the thumbnails too, lol, some are quite funny.
Exactly. I'm a 25-year-old engineer, and Continuous Delivery is low-key the best tech YouTuber compared to the sensational stuff usually out there.
"test as you need to change things"
That isn't TDD, though. TDD requires that you write the tests first. Maybe TDD should be less dogmatic.
He didn't say you can't write the tests first. I interpreted this as "when you need to change things, write the failing test first (red) then write the code (green)" as expected in a normal TDD cycle.
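In miniature, that red/green cycle might look like this (isLeapYear is a hypothetical example): write the failing test first, then just enough code to make it pass, then refactor.

```ts
// Step 1 (red): a failing test that demands the change.
test("years divisible by 100 are not leap years unless divisible by 400", () => {
  expect(isLeapYear(1900)).toBe(false); // fails until the rule is implemented
  expect(isLeapYear(2000)).toBe(true);
});

// Step 2 (green): just enough code to make it pass, then refactor.
function isLeapYear(year: number): boolean {
  return year % 4 === 0 && (year % 100 !== 0 || year % 400 === 0);
}
```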
This is generally pretty hard. TDD in an existing codebase without unit tests is... hard. But, yes, nothing stops you from abstracting the piece you need to change and then writing a test that demands the feature you need to add.
The main thing that makes it hard is that the code you need to change is often very difficult to test, because if you are used to using TDD, then the code you are used to working on tends to be a lot easier to test.
@@SiasMey the thing is that as you get better at TDD, you in turn get better at generally writing clean, loosely coupled code. To where, if you've gotten "good enough," you can probably write fairly clean code even without tests (of course, it might not do the right thing, that's a different question). So the "design" part of TDD (some people even call it test-driven-design since it really encourages loose coupling, single responsibility, etc.) is something that eventually you can do, at least to a good extent, just by knowing what it means for code to be "testable." That gives you a bit of wiggle room in terms of when you really ought to write tests first, and when you can write them later, especially if -- again, through experience -- you have some intuition around what tests are going to be complex (expensive) to write and maintain relative to others. Or what tests may become redundant once you get a higher-order test working that covers the bit you're currently glossing over.
I bet someone like Dave can write code that looks as if it were test-driven, or close enough, just because he's written and seen enough clean code and smelly code and can easily tell the difference. Whether that's something one ought to ever do... well, I don't think the answer is obviously "no." There are times when it is probably sensible, but it requires some measure of experience and wisdom.
@@sciros I completely agree.
There is an odd chicken-and-egg thing though. Yes, you get better at writing testable code, but you also start to think, "If it's so easy to test, why don't I just test it?"
There is always wiggle room in these discussions and approaches.
I believe the biggest trap for experienced practitioners is to assume that their experience translates simply and easily.
I learned TDD from YouTube, and it's taken me two years to get to a place where I can start to think about what it really means and what it's really about. I look at some of the first videos I watched and realize... oh, that means something totally different now that I have more experience.
In short, it's hard, and educating others is even harder. To me, it's simpler to just say, "Just write the test"; the subtext is, "When you are good enough to decide that you don't need the test, you will probably be good enough to just write it and help the next person."
It's fine being a fan of TDD, but I think this channel tends to promote it like it's the "obvious best choice". Maybe it is a lot of the time, but in my experience it would've made things a lot more difficult, so I see it as a situational benefit.