🚀 TDD, Where Did It All Go Wrong (Ian Cooper)

  • Published 21 Nov 2024

COMMENTS • 598

  • @kevinfleischer2049 · 3 years ago · +196

    I like this talk. What I learned:
    "Tests protect something. You want them to protect requirements, not implementation details. Thats why you delete tests that were only there to make it happen, but are not a result of a requirement"

    • @DEvilParsnip · 3 years ago · +7

      A beautiful TL;DR, thank you soooooooooo much.

    • @JorgetePanete · 11 months ago

      That's*

    • @TheDavidlloydjones · 10 months ago

      That's loony, Kevin. Tests are small, mechanical procedures. They challenge mechanical workings of a program.
      Getting requirements and design right is an intellectual (and political) process. It has to be done by people representing the interests being served by the work at hand.
      It's not a matter of test, it's a matter of debate and judgement.

    • @kevinfleischer2049 · 10 months ago · +1

      @@TheDavidlloydjones What you describe regarding requirements does not oppose what I wrote.
      But you are wrong about how tests should be written. The way you describe will result in tests that break when you refactor software. Thus your tests will hinder changes. It's a mistake I've made in the past, but one worth preventing.

    • @ForgottenKnight1 · 9 months ago

      When management makes the test coverage of some tool a "requirement" the whole TDD thing goes tits up. I've seen it a dozen times and a dozen times it has the same result - a lot of tests that don't test jack shit, they are just there to do some coverage by either "verifying" something or "testing" implementation details.

  • @gareth9012 · 3 years ago · +77

    15 years ago, in one of my first contract programming jobs (I was a late starter), Ian took me onto his team, assigned me a more experienced programmer as a mentor and told me to read Kent Beck's TDD book. It completely changed the way I approached programming. I'm immensely grateful for the time he took to teach me. Great guy.

  • @simonvv1002 · 4 years ago · +702

    Notes I made during this presentation (just a dump, might be useful to some):
    - Test requirements, not low-level details
    - Test the public API. Given/When/Then
    - Test the exports from a module
    - Focus on the higher level
    - Test modules, not classes
    - Refactoring is needed to see what is implementation and what is exported from the module
    - Test behaviours
    - Think about your code as an API
    - Test the abstraction, not the implementation
    - Tests are isolated, with a shared fixture (to run quickly)
    - Red-green-refactor (go fast to working code)
    - No new tests during refactoring
    - Heavy coupling is the problem with all software
    - Thin public API
    - Refactoring = changing internals
    - Patterns in the refactoring
    - If you're not really sure, write tests for the implementation (then delete those tests)
    - Not classes, behaviours
    - Don't isolate classes in testing
    - Private methods are implementation details
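
    The notes above come down to testing behaviour through a module's public surface. A minimal sketch of the difference in Python (the Cart module and its discount rule are invented for illustration):

```python
# Hypothetical module: the public API is Cart.add()/Cart.total(); the
# discount helper is an implementation detail and gets no direct test.

class Cart:
    def __init__(self):
        self._items = []

    def add(self, name, price, qty=1):
        self._items.append((name, price, qty))

    def total(self):
        subtotal = sum(price * qty for _, price, qty in self._items)
        return self._apply_discount(subtotal)

    def _apply_discount(self, subtotal):
        # Implementation detail: free to change without breaking the test.
        return subtotal * 0.9 if subtotal >= 100 else subtotal


# Behaviour-level test, named after the requirement (Given/When/Then),
# not a probe of _apply_discount.
def test_orders_of_100_or_more_get_10_percent_off():
    cart = Cart()
    cart.add("book", 50, 2)    # Given a 100-unit order
    assert cart.total() == 90  # Then 10% comes off
```

    Refactoring `_apply_discount` (inlining it, moving it to a strategy object) leaves this test untouched, which is the point of the notes.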

    • @vborovikov · 4 years ago · +11

      so it's just common sense?

    • @DanHaiduc · 4 years ago · +23

      @@vborovikov Yes, but people don't seem to be having it. Hence the need for this talk.

    • @DanHaiduc · 4 years ago · +8

      Also, keep tests fast.

    • @dyyd0 · 4 years ago · +13

      @@DanHaiduc Clarification: the whole test suite should run in under a few minutes, not just one unit test. And having DB communication during testing is OK as long as it is isolated from other tests (by cleaning it, using exclusive instances, or whatever; basically an in-memory DB that is recreated for each test).
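
      The isolation described here is often done with a fresh in-memory database per test; a minimal sketch using Python's sqlite3 (the schema is invented for illustration):

```python
import sqlite3

def fresh_db():
    # A brand-new in-memory database per test: nothing leaks between
    # tests, and setup stays fast enough for the suite to run in seconds.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
    return conn

def test_insert_is_visible():
    db = fresh_db()
    db.execute("INSERT INTO users (name) VALUES (?)", ("ada",))
    count = db.execute("SELECT COUNT(*) FROM users").fetchone()[0]
    assert count == 1
    db.close()

def test_next_test_sees_an_empty_table():
    db = fresh_db()  # previous test's row is gone: new instance
    count = db.execute("SELECT COUNT(*) FROM users").fetchone()[0]
    assert count == 0
    db.close()
```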

    • @DanHaiduc · 4 years ago · +4

      @@dyyd0 You are absolutely right. After watching Fast Test, Slow Test I have realized that the whole test suite should run in **less than two seconds**. It is possible if you don't invoke any big frameworks during the tests:
      ua-cam.com/video/RAxiiRPHS9k/v-deo.html

  • @pavel_espinal · 2 years ago · +23

    I must confess that I've been mostly reluctant about the idea of TDD until just now. This is how TDD should have been "sold" or introduced from the beginning.
    Many people make TDD sound as if you have to know what the implementation of your methods is going to look like even before writing your first line of code.
    Outstanding talk.

  • @ruixue6955 · 3 years ago · +59

    21:00 where did it go wrong in TDD
    22:30 recommended book
    23:53 24:01 *do not test implementation details, test behaviors*
    24:15 in the *classic modern TDD cycle*, I will write a test before adding that method, and that test will govern whether that method succeeds or fails 24:30 *the trigger for writing a test in TDD practice is essentially adding a method to a class, THAT IS THE WRONG THING!*
    24:50 *THE TRIGGER in TDD for creating a new test is that you have a requirement you want to implement*
    26:03 testing the public API
    26:13 what is the contract your software has with the world 26:28 it (the API) will not change rapidly 26:36 *how you implement that requirement (contract, API) is unstable*
    26:44 what your software offers to consumers is the *stable contract; that is what you should test*
    26:54 not HTTP API
    28:46 SUT (system under test) is not a class
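
    The "stable contract, unstable implementation" point at 26:36 can be sketched as one test exercised against two implementations (function names invented for illustration):

```python
# Two versions of the same "module": v1 naive, v2 refactored internals.

def unique_sorted_v1(xs):
    out = []
    for x in xs:
        if x not in out:
            out.append(x)
    return sorted(out)

def unique_sorted_v2(xs):
    # Refactored internals; the contract (observable behaviour) is the same.
    return sorted(set(xs))

def check_contract(impl):
    # Written once against the contract; both implementations must pass,
    # so refactoring v1 into v2 never touches the test.
    assert impl([3, 1, 3, 2]) == [1, 2, 3]
    assert impl([]) == []

check_contract(unique_sorted_v1)
check_contract(unique_sorted_v2)
```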

  • @alinaqvi2638 · 5 years ago · +503

    This guy is speaking from hard-earned real-world experience. We need more engineers like him imparting their knowledge, rather than 25-year-olds who have 1.7 years of real industry experience and have already written 2 books.

    • @626Pilot · 5 years ago · +33

      Robert C. Martin has been programming for over 50 years, and he disagrees. He says go for 100% coverage. If you don't test implementation, you are going to miss bugs and design flaws.

    • @ishcatu · 5 years ago · +23

      @@626Pilot Robert C. Martin actually agrees with what is said in this talk. It can be read in his article about test contra-variance. Just google: cleancoder test contra-variance.

    • @redhotbits · 5 years ago · +20

      @@626Pilot uncle bob is full of shit

    • @nicogreco6926 · 4 years ago · +75

      "He says go for 100% coverage..." Code coverage is a hollow stat that many inexperienced developers put far too much emphasis on. You can still have various bugs with 100% code coverage; just because you executed every line doesn't mean the behavior is valid.

    • @tamashumi7961 · 4 years ago · +6

      @Peter Mortensen No test fails when a comment starts telling lies because the underlying code changed.
      This is why it's better to avoid comments. I appreciate there might be complexity in certain systems that is hard to grasp without a comment, though.

  • @ikeo8666 · 4 years ago · +223

    The problem comes from tools that do "CODE COVERAGE". Because of that metric, devs just end up testing their implementation, so the chart that goes to the bosses and regulators says "oh look, 99% test coverage" when in practice it's doing absolutely nothing to improve the code.

    • @batesjernigan1773 · 3 years ago · +14

      I get the knee-jerk reaction, but IMO it can tell you a lot if it's below 80%. I think there should be a minimum, but chasing 100% coverage isn't worth it, for the reason you mentioned.

    • @robertwhite3503 · 3 years ago · +6

      Yes, I think test coverage tools may have led me astray. I don't use them, but they imply that all methods should be tested. Also, the test-first methodology makes me think of testing methods, because I think in terms of writing methods. However, I am going to try writing tests based on the expected behaviour. Generally everything I write starts with a web page and ends in a database update. Most of this boils down to the public services, which are often not complex, so I may end up testing methods anyway. Where I have private methods, that is arguably the tricky stuff that needs testing, but maybe not.

    • @fennecbesixdouze1794 · 3 years ago · +24

      No, that's not where the problem comes from. There's a fairly big name in extreme programming, who does hundreds of talks that everyone watches and listens to, who repeatedly, ad nauseam, says you must write a test before writing each single new line of code. He even says you cannot call yourself a professional if you don't do that. The problem is that we have people like that, leaders and speakers in our field, who are talking out of their ass.

    • @EdouardTavinor · 2 years ago · +8

      I once had the idea of generating a file with a million lines in which each line just adds 1 to a number. At the end I write a unit test: assert(myFunction() == 1000000). And instantly our code coverage goes up 30%!
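
      The gag above, made concrete as a sketch (scaled down to 1,000 lines to keep it quick; a coverage tool would count every generated line as exercised by the single trivial assert):

```python
# Generate a function whose body is N "n += 1" lines, then load it.
N = 1_000  # the comment says a million; 1,000 keeps the sketch fast
src = (
    "def my_function():\n"
    "    n = 0\n"
    + "    n += 1\n" * N
    + "    return n\n"
)
namespace = {}
exec(src, namespace)  # every one of those N lines now executes...

# ...under one trivial assertion. Coverage soars, confidence doesn't.
assert namespace["my_function"]() == N
```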

    • @danvilela · 2 years ago · +1

      This is because bosses ask for it, not because of the tool itself.

  • @common_collective · 6 years ago · +138

    I never understood the benefits of TDD until I watched this talk. This is gold. Ian Cooper seems to be about the only person making sense on this subject

    • @SallyWaters24 · 6 years ago · +8

      I'd add to the short list: Kent Dodds.

    • @cryp0g00n4 · 4 years ago · +1

      Yeah, he definitely seems like he has a real understanding from his experience as a software engineer, and he did for us what the book he described did for him: condensed years of experience into a succinct medium.

    • @developer_hatch · 2 years ago

      This comment is duplicated... odd.

    • @0netom · 2 years ago · +1

      IIRC, David Chelimsky's RSpec book made me understand the benefits of TDD and the trick of writing tests first. Then Uncle Bob's Clean Code series deepened my understanding. Now, 15 years later, I still struggle with using it, because when you work in teams, your colleagues are not disciplined enough to follow such practices closely, and once they've made a mess, it's really hard to stay on course... :( But it's worth the effort!

    • @encapsulatio · 2 years ago

      @@0netom How is using TDD helpful if you already can just evaluate every function in Clojure?

  • @gamemusicmeltingpot2192 · 4 years ago · +32

    This talk is twice as effective after having used TDD myself; it helped me identify mistakes and incorrect ways of thinking a lot.

  • @lucasterable · 2 years ago · +23

    35:15 "the unit in isolation is the test". This is HUGE! Need a quote from Beck's book.

  • @colin7406 · 4 years ago · +93

    You know this is a good video when you look at the like-to-dislike ratio despite the maddening creaking going on throughout the presentation.

    • @MaZeeT · 2 years ago · +16

      Didn't age too well :(

    • @Oktokolo · 2 years ago · +3

      The creaking likely comes from something rubbing at the mic or its cable whenever he moves. Testing the recording setup before going on air still is extremely underrated...

    • @basileagle314 · 2 years ago · +11

      @@Oktokolo maybe the conference should have followed TDD principles

    • @Oktokolo · 2 years ago · +2

      @@basileagle314 I wonder, what TDD would look like outside development...

    • @basileagle314 · 2 years ago · +3

      @@Oktokolo before you start using the microphone you talk normally in a large room to make sure they can't hear you at the back

  • @geshtu1760 · 5 years ago · +44

    I clearly hold a minority opinion, but it still seems to me that TDD (and all ideologies like it) comes from a magical place where business/user requirements are clear and determined in advance and never change, and developers somehow not only know in advance how they will develop a thing and what challenges they will face, they also somehow know the best way to do it, such that their design / API will not change. After coding for over 20 years and working many years in automated test, I have yet to meet such a developer. Coding involves exploration. You can't test for what you haven't yet discovered, and only a fool pretends there is nothing to discover.
    My disagreement with TDD is not because I think it is slower. It is because I just don't think developers (or anyone for that matter) are good at predicting the future. I suspect what ends up happening is that tests get updated after the software is done, which is fine but it's not what we're being sold.
    I have not written (m)any pieces of software where I knew the correct design or even how it would eventually work, before I started. I have a rough idea, but you can't write tests for a rough idea and expect nothing to change. Invariably the process of writing software involves some measure of exploration of the problem space and some iteration over various interfaces and designs, before eventually the API or UI or design begins to take shape and stabilise. Often the first attempt is not the best one, and yet you need to make that first attempt in order to better understand the problem space, and by then you will have gained the knowledge required to implement a more robust solution. Sometimes this process repeats several times. This solution may well (and indeed often does) change the contract with its would-be consumers (but that's ok because they haven't been finalised yet either). It seems to me that if you write the tests first, your lack of knowledge/foresight of the final product will be reflected in the tests, and they will then constrain the implementation unnecessarily, as you inevitably implement the code to conform to the tests rather than to solve the problem you originally intended to.
    Is it really so bad to write software, or even just a component, and THEN write the tests now that you better understand how all the pieces fit together? You write the code, and iterate until the design feels right. The interface is also perfected here. Then having understood the thing to be tested, you (or better, someone else who understands the thing to be tested, or better still, both) write the tests that are designed to (a) validate the contract between software and consumer and (b) try to break that contract. Apologies for long rant but I assume someone will correct me and I will learn something new.

    • @raulvazquez3511 · 4 years ago

      > comes from a magical place where business/user requirements are clear and determined in advance and never change
      It takes several iterations to get the requirements right, and on each iteration, they will change. Looking at the latter part of your comment, I think you agree on this.
      > Coding involves exploration
      That's the key, where TDD helps is on exploring the unit (as a use case, not a class) instead of the entire system.
      > You write the code, and iterate until the design feels right. [...] having understood the thing to be tested, you (or better, someone else who understands the thing to be tested, or better still, both) write the tests that are designed to (a) validate the contract between software and consumer and (b) try to break that contract.
      Yeah, that's it. The difference with test-first TDD is when that iteration is done. Instead of favoring iterations on the design, it favors iterations on getting the requirements right.
      Around 40:20 Ian goes deeper into the red-green-refactor cycle. First, the requirements (with tests) are a bare definition, something that doesn't work (red). Some code is written to make the test pass; at this point the code is DIRTY and SINFUL, there IS NO good design, but this initial design and this initial definition can provide feedback quickly. It can be validated, and with more details comes a better understanding (you can even validate them with someone else, as you said). A few iterations might be needed to get to an acceptable behavior (green). The important thing is to not forget the exploration, the discoveries, the new details of the requirements; the way TDD keeps them is by writing them into the tests. Once there, the design can be improved (refactor), so the design can be changed without changing the requirements (already validated by tests at this point).
      With this approach, the requirements details (and their validation, with tests) come before the design. The logic behind it is: what's more important, the system behavior fulfilling the requirements, or the design behind it?
      In TDD, a change in a test should come from a change in the requirements (the contract); a change in the design (refactor) should not affect the tests. In practice this is hard to achieve, but knowing it changes the level at which tests are written and lowers the coupling.

    • @PetrGladkikh · 4 years ago · +4

      @@raulvazquez3511 Good design fulfills requirements; there's no contradiction. And it is not optional, while unit tests restrict how you can (or are willing to) change your system. I still fail to see why I should bang my head against every wall in the room (tests red), make random turns (bad implementation), and only then walk out of the room suddenly realising where the door is (gaining understanding). I prefer to look around and see where the door is, then write a test to cover that; the test would be a lot simpler then. Still no answer to why I should write tests BEFORE the code. Also, I think TDD is a misnomer; it is actually Unit Test Driven Development, by the way.

    • @PetrGladkikh · 4 years ago · +5

      @@raulvazquez3511 Also, I have a question about the word "quickly". Why? I think here is the core of the disagreement: I think the people who like TDD are the ones who _feel_ productive doing it. You create problems, you solve problems, all that Brownian motion. Whereas if you sit there thinking, not typing anything, that arguably may not feel like a productive activity. But it is the result that matters.

    • @raulvazquez3511 · 4 years ago

      @@PetrGladkikh the result is what matters, of course. Some people feel safer doing small steps and constantly verifying they are on the right path.

    • @TheWandererTiles · 4 years ago · +5

      Can I buy you a beer? I can't agree enough. If you do TDD for most projects you will spend your entire time and budget before you have even delivered a prototype to a very unhappy customer.

  • @rv4tyler · 2 years ago · +7

    'Think about your code as an API' was my biggest takeaway from using the TDD process. That was the biggest pivot in the way I wrote code.

  • @MrFedX · 5 years ago · +34

    I’ve had a hard time getting into TDD and now I realize why. Great talk on the philosophy and practice of TDD. I’ll read Kent Beck's book right away and start testing behaviour. :)

  • @Endomorphism · 5 years ago · +5

    It always goes straight into the box of understanding when someone talks about the actual philosophy of a subject.
    He is talking from experience. Real gold!!!
    THANX :-)

  • @BangsarRia · 3 months ago

    This is by far the most effective use of repetition, rephrasing, and restating over and over again that I have ever seen. Kent Beck's book, and also Uncle Bob's teaching on TDD, are much clearer to me now from a practice POV.

  • @paulhammond8583 · 3 years ago · +7

    The best tech talk I've ever watched. Had a huge impact on my career.

  • @ShirazEsat · 2 years ago · +1

    Where did we go wrong? When we started making "units" small. Excellent presentation, with a lot to learn from it.

  • @maik8338 · 2 years ago · +2

    I want to say that this is one of the most helpful talks on TDD, or software development in general, that I have ever heard. Just one tiny thing... the background noise, I think it's the floor, drives me crazy.

  • @pepijnkrijnsen4 · 1 year ago · +2

    I'm about halfway through TDD by Example and listening to this talk again for probably the fifth time. TDD is extremely straightforward and incredibly deep at the same time. I'm still working on fully incorporating it in practice, but it's SO much fun.

    • @lepidoptera9337 · 1 year ago

      Why are you telling us that you like to torture yourself? ;-)

  • @XeonProductions · 8 months ago · +2

    Maybe the reason I hate TDD so much is that I've always been on projects that were rapidly changing with poorly defined requirements. I very seldom encounter a project with good enough requirements to write tests BEFORE writing the code. I've also worked on projects where the leads were obsessed with code coverage and code smells, even though the code was still a bug-ridden mess. I've spent more time writing tests, refactoring tests, debugging tests, and researching how to write tests than I ever did writing any of the code. The time and cost savings just weren't there, and still aren't. In my experience my untested code has been no more buggy than my tested code. Then you have the massive bloat that comes with making every single dependency in a class injectable (interface hell) so that it can be "easily testable", even though most of the dependencies you inject will NEVER need an alternative implementation during the lifetime of the code. And then you have the problem of poorly written tests by inexperienced or low-skilled developers, which throw up false positives, or falsely show that something is passing, giving you a false sense of security.

  • @BlazingMagpie · 4 years ago · +59

    I was stuck for months on this question about TDD: "How are you supposed to write the tests for the methods before you know what the structure will look like?". This answers it completely.

    • @Whyoakdbi · 3 years ago · +5

      Hahahah same, man! I was like: am I supposed to be some sort of genius, to figure out in my head all the functions I need and write tests for all of them before I write them? And often those functions change, because I refactor the code when the initial implementation was not optimal enough...

    • @oscarmvl · 3 years ago · +1

      And what’s the answer?

    • @BlazingMagpie · 3 years ago · +11

      @@oscarmvl You don't write tests at small scale: you write tests for the program's end goal, not tests for implementation details. That way you can refactor internal details without having to also rewrite all of the tests, and still be able to tell when the program starts to fail.

    • @oscarmvl · 3 years ago

      @@BlazingMagpie makes a lot of sense, thanks for the quick reply!

    • @Whyoakdbi · 3 years ago · +1

      @@BlazingMagpie actually that's a bad idea. You can never test all possible scenarios like that

  • @ruixue6955 · 3 years ago · +19

    30:43 *don't write tests to cover implementation details (because those change)*
    30:53 *write tests only against the stable contract of the API*
    31:02 *Refactoring* is the key step, which enables you to *separate the implementation details you don't need to test from the things you do need to test*

  • @gpk6458 · 4 years ago · +12

    This is the most important thing I've seen this year. Wish I'd seen it earlier.

    • @this-is-bioman · 6 months ago · +1

      It's been a while since your comment. How is your code doing today? Still into TDD or already cured? 😅

  • @syrus3k · 6 years ago · +83

    This is absolute gold and an eye opener.. thanks, thanks a million, thanks without end.

    • @davemasters · 4 years ago · +4

      lol @ the stick man reference

  • @TokyoXtreme · 6 years ago · +317

    Creaking intensifies.

    • @dthompsonza · 4 years ago · +19

      sniffs :P

    • @portlyoldman · 4 years ago · +18

      Nearly drove me mad...

    • @JSmith73 · 4 years ago · +2

      NVIDIA RTX Voice ftw.

    • @ikeo8666 · 4 years ago · +13

      why did I read this comment? the whole video is ruined now

    • @narthur157 · 4 years ago · +3

      I had to stop

  • @manawa3832 · 5 years ago · +17

    If your tests are brittle, that means your program is brittle. Trust me, this is a common pitfall. For example, I have a TimeWatcher object with a bunch of tests. The TimeWatcher uses Utc to compare timestamps and does some stuff. My tests reflect this. Later I decide, oh no, I need to use my Local Time. You make the change, and your tests break. This is proof that your program is brittle. What you should have done was construct the TimeWatcher with an object that abstracts away dealing with variations of time. That way, you simply add a new LocalTime object, test it with new tests, and inject it into TimeWatcher. None of your tests break. Your code needs to be open for extension, but closed for modification. If it's not, your tests will let you know.
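
    A sketch of the injected-clock design the comment describes (TimeWatcher is from the comment; the clock classes and method names are invented for illustration):

```python
from datetime import datetime, timezone, timedelta

class UtcClock:
    """Production clock: current UTC time."""
    def now(self):
        return datetime.now(timezone.utc)

class FixedClock:
    """Test double, but also just 'another clock' behind the abstraction."""
    def __init__(self, at):
        self._at = at
    def now(self):
        return self._at

class TimeWatcher:
    def __init__(self, clock):
        self._clock = clock  # depends on the abstraction, not on UTC

    def is_expired(self, deadline):
        return self._clock.now() > deadline

# Switching to local time later means adding a LocalClock and injecting
# it; TimeWatcher and the tests below are untouched.
start = datetime(2024, 1, 1, tzinfo=timezone.utc)
watcher = TimeWatcher(FixedClock(start + timedelta(hours=1)))
assert watcher.is_expired(start) is True
```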

    • @xybersurfer · 1 year ago

      But then you would have to anticipate that you might later want to use Local Time instead of UTC?

  • @johnnyw525 · 2 years ago · +11

    Short version: if your tests will be broken by refactoring, you've written bad tests (generally speaking; there are probably a couple of unavoidable exceptions).

    • @this-is-bioman · 6 months ago

      Every refactoring is an exception lol it's natural that tests break if you move functions around or create new classes or functions. Otherwise it wouldn't be refactoring 😂

  • @youtux2 · 1 year ago · +4

    24:50 The trigger in TDD for creating a new test is that you have a requirement you want to implement.

  • @jmrah · 5 years ago · +16

    It's too bad he was running out of time at 57:00. I would have loved to hear more elaboration on the testing that should be done on the adapters in the Ports and Adapters model. Sure, I can test my application through my ports, but how do I know that my adapters and ports communicate correctly? How do I know, for example, that my HTTP adapter returns a 404 when my core application throws a UserNotFoundException? Surely I shouldn't duplicate all my port tests, just calling them through the adapter interface. It would have been nice to hear a little more about that part.
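
    One common answer to the adapter question above is a narrow adapter test with the port faked out; a hedged sketch (UserNotFoundException and the 404 mapping come from the comment, everything else is invented for illustration):

```python
class UserNotFoundException(Exception):
    pass

class FakeUserPort:
    # Stands in for the core application behind the port.
    def get_user(self, user_id):
        raise UserNotFoundException(user_id)

def http_get_user(port, user_id):
    # The adapter's only job: translate port outcomes into HTTP terms.
    try:
        return 200, port.get_user(user_id)
    except UserNotFoundException:
        return 404, None

# Adapter test: only the exception-to-status translation is asserted.
# The port's own behavior is covered by the port-level tests, so the
# port tests are not duplicated through the adapter.
status, body = http_get_user(FakeUserPort(), "42")
assert status == 404 and body is None
```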

  • @DodaGarcia · 3 years ago · +1

    I was apprehensive about the length of the video but once I read “ports and adapters” two minutes into it I knew I had made the right call

  • @orange-vlcybpd2 · 5 years ago · +56

    23:59 - "don't test implementation, test behaviour"
    34:46 - "the test becomes the first consumer of your code"

    • @ashrasmun1 · 5 years ago · +1

      Can you elaborate?

    • @626Pilot · 5 years ago · +10

      @@ashrasmun1 I can elaborate! This video is full of terrible advice, and the best reason to watch it is to find out what not to do. If you "only test behavior" you can have a huge pile of shit in your code, which just happens to pass your "behavioral only" tests. That pile of shit will slow you down as soon as you have some new use case that your original "behavioral test" didn't account for, because your code will tend to lean into your "behavioral only" tests. Whatever they let your code get away with is "okay" because the test says so, right?
      This, of course, _is the whole point._ We _incorrectly_ think it's onerous to test 100% of our code, every line, so we try to get away with less than that. (If any at all.)
      Undisciplined coders will flatly refuse (or strenuously object, under management pressure) to write tests, no matter what anyone says. Consummate professionals who care about what happens as a result of their work will, among other things, test 100% - every line. What this video presents is ~70-80% of the way there. (I mean this literally; you will hit about 70-80% code coverage with what he suggests.)
      The last 20-30% of code that isn't tested is where much of the shit is going to hide. Some of the shit may be in your tested methods; but if you bring those un-tested methods under the control of tests, you will soon find it necessary to refactor and get rid of the shit.
      I used to do this "integration only" testing myself. It was far better than nothing, but it still left much to be desired. I kept having to go back and fix shit in the untested code, over, and over, and over again. I kept finding bad designs hiding in that untested code, and lax behavior that enabled bad designs to persist in tested methods.
      Finally, I got sick of it, and decided to try 100% coverage, just for a month, to see if I liked it. Everything got WAY better. It's so much easier to refactor when everything is tested, and so much harder to get away with a shitty design as well. If I have to go back and fix or extend this 100%-tested code, it's _easy._ Easier, by a wide margin, than that kind of work has ever been before, and I have been doing this for _decades._

    • @BlackLinerer · 5 years ago · +27

      @@626Pilot I do not agree. If you do TDD, then you can delete every line of code that is not covered by the test, because it can't be needed. --> 100%

    • @626Pilot · 5 years ago · +1

      @@BlackLinerer If you do TDD, you hit 100% coverage, so what isn't covered by the test is the lines you throw away. Any line that is important enough to put into production is important enough to cover.

    • @zzzfortezzz · 5 years ago · +15

      @626Pilot : I'm no expert here, but I wonder: if you do 100% coverage, then when you change the way you implement the behavior, even a small change that doesn't impact the behavior at all, it still breaks tests. I think that's unnecessary.
      Since you said you tried it for just 1 month, I think it's still too soon to tell. However, here's another clip that talks about the bad parts of TDD; the speaker's team was obsessed with code coverage, and it nearly killed their project:
      ua-cam.com/video/xPL84vvLwXA/v-deo.html
      And another article with a small section discussing code coverage:
      medium.com/javascript-scene/mocking-is-a-code-smell-944a70c90a6a
      ```
      In my experience, increasing coverage beyond ~90% seems to have little continued correlation with lower bug density.
      Why would that be? Doesn’t 100% tested code mean that we know with 100% certainty that the code does what it was designed to do?
      It turns out, it’s not that simple.
      What most people don’t realize is that there are two kinds of coverage:
      - Code coverage: how much of the code is exercised, and
      - Case coverage: how many of the use-cases are covered by the test suites
      [...]
      100% code coverage does not guarantee 100% case coverage.
      Developers targeting 100% code coverage are chasing the wrong metric.
      ```
      As I said, I'm no expert, still in the process of practicing TDD. However, I'll go with the more experienced experts, since their explanations are more reasonable.

  • @jerrychen2663 · 3 years ago · +17

    Since the video suggests that we should write unit tests for modules, not classes, the first thing we need to do before starting TDD is to divide the system into modules well.

    • @OnlyForF1
      @OnlyForF1 6 months ago +3

      You don’t though! All you need to start test driven development is a single behaviour you’d like to verify functions as expected

  • @juliakirshina4244
    @juliakirshina4244 3 years ago +6

    this is the best TDD talk I have ever seen !!!!

  • @alejandroagua5813
    @alejandroagua5813 4 years ago +4

    The best way is to be flexible. Like everything in life, there are tradeoffs. There's a tradeoff between the time it takes to ship your code and make your boss happy, and the time it takes to do TDD. You should know when it's good to apply TDD and when it's good to selectively test only behaviors and high-level code. Most of the time you want to strike the right balance between time to market and the time you spend on the TDD routine. But when being a duct-tape programmer is needed: high level over implementation detail, behavior over implementation, functionality over structure, public API over internal classes, and writing features or refactoring over idly watching reds and greens.

    • @John-Galt-Misfit
      @John-Galt-Misfit 1 day ago

      Most programmers are idiot savants. They have some Guru who throws around an acronym and now they dissect and discuss that acronym to death. Meanwhile their corporate overlords make all the big bucks.

  • @olleharstedt3750
    @olleharstedt3750 2 years ago +20

    Imagine if this guy actually had presented some numbers. Bugs per developer, bugs per change request, cost of TDD before and after his approach... We need empiricism.

    • @derhintze
      @derhintze 2 years ago

      Yeah, I think anyone would like to replace feelings with evidence. But that's really hard to achieve. Did you have a look at "Software Creativity" by Robert L. Glass? That one's a nice, balanced book on this topic.

    • @this-is-bioman
      @this-is-bioman 6 months ago

      You mean numbers that are going to be manipulated, faked and staged for the sake of getting them as soon as they are tracked?

  • @sumantkanala
    @sumantkanala 6 years ago +7

    Oh boy, the part where he describes "programmer anarchy" (build fast, fail fast, ship small) really resonates with what I have been doing!

  • @gregbell2117
    @gregbell2117 6 years ago +10

    I wish our industry could get its story right. My understanding of the point of TDD was that it was supposed to show you when your design needed re-working. Tough/weird to test? Refactor. The test drives development.

    • @Damien-y9c
      @Damien-y9c 6 years ago +4

      It is far more subtle than that, though, when you start asking *when* to test. Before each feature? Each method/function? Each line? Do you test after a feature, or method, or line? There are trade-offs everywhere. I am far more productive if I can experiment with implementation without ANY tests and then write tests to cover the *behaviour* afterwards. There is no right way... but testing absolutely should be considered an integral part of development. It's guesswork otherwise.

    • @gregbell2117
      @gregbell2117 6 years ago

      I don't see how test-last is beneficial there. With a failing test written first, you're free to experiment with implementation, always knowing that it still works. That's unless you've written a test at too low of a level, so that it's sensitive to changes to your implementation.

    • @Damien-y9c
      @Damien-y9c 6 years ago +2

      It's too subtle and specific to the exact work you're doing to prescribe a "test-first" or "test-after" across the board. For example, when experimenting with frontend implementations it often isn't viable to have a high level test which will not be sensitive to the implementation - what do you hook into? words on the screen or element classes or types.. what if they change (usually do during that early phase)? I'm not going to write tests, regardless of how high level they are, which hook into an experimental HTML structure which is under constant change. It would slow me down far too much as I'd have to keep updating the tests. Again, this is highly dependent on what you are developing, as test-first absolutely does make sense in a lot of cases.

    • @gregbell2117
      @gregbell2117 6 years ago +1

      Absolutely right. I think this subtlety is missing in a lot of TDD discussion. I really do not want to feel friction when I want to refactor just because a pile of tests are going to fail if I change one little thing. I use screengrab based regression testing sometimes (ie. the changes I'm making should not affect the output). Other times I'm testing business logic, so my "unit" is a feature (a BDD/TDD mix) and I'm looking for specific values in the DOM, not place in the DOM hierarchy or other specifics.

    • @626Pilot
      @626Pilot 5 years ago +4

      This video is not about TDD. It is about the half-measures developers resort to when they find TDD difficult to work with. It rewards code that's easy to test, and punishes code that's hard to test. The secret is that code that's hard to test is _bad,_ and needs to be refactored. This secret is not known to these developers. They confuse the steep learning curve for evidence that there is something wrong with TDD. They assume it will _always_ be hard. So they come up with any excuse to skip this test, skip that test, etc. And then they write a blog or give a speech about how clever they are to have done this.
      Thinking that real TDD is too hard or not worth it is a _symptom._ It _goes away_ the more you stick with it. Being rewarded for writing easily testable code, and punished for writing code that's hard to test, forces you to give up bad habits. The habits are bad because of the fundamental truth at the center of TDD:
      *Loose coupling and testability are the same thing.*
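
626Pilot's closing line, that loose coupling and testability are the same thing, can be sketched with a hypothetical example: the same logic is awkward to test while it owns its dependency, and trivially testable once the dependency is passed in.

```python
import datetime

# Hard to test: the function reaches out to the wall clock itself,
# so a test's result depends on the time of day it runs.
def greeting_hard_to_test():
    hour = datetime.datetime.now().hour  # hidden, tightly coupled dependency
    return "Good morning" if hour < 12 else "Good afternoon"

# Testable: the dependency is a parameter. Decoupling the logic from
# the clock is exactly what makes it trivially testable.
def greeting(hour):
    return "Good morning" if hour < 12 else "Good afternoon"

assert greeting(9) == "Good morning"
assert greeting(15) == "Good afternoon"
```

The refactoring that made the test easy is the same refactoring that loosened the coupling; neither can happen without the other.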

  • @neozoan
    @neozoan 2 years ago +1

    I particularly appreciated the bit about using unit tests to aid in building implementation details and then deleting them. I've got quite a few of those to hunt down and it makes sense, as these are often superseded by higher level tests that focus on the public interface.

    • @lepidoptera9337
      @lepidoptera9337 1 year ago

      If your public interface test is not performed by wetware, then you are doing it wrong.

  • @ismailm123
    @ismailm123 6 years ago +5

    Watched the original version of this talk a few years ago. This updated version is even better.

    • @ismailm123
      @ismailm123 3 years ago

      @Peter Mortensen there you go 😉

  • @shantanushekharsjunerft9783
    @shantanushekharsjunerft9783 4 years ago +5

    Basically, automate the tests that developers do manually anyway after implementation to verify the correctness of the feature. That automated test is TDD.

    • @algimantas.stancelis
      @algimantas.stancelis 2 years ago +3

      No. TDD is when you write test before you implement the feature.
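
The reply's definition (write the test before the implementation) in miniature, with hypothetical names:

```python
# Red: this test is written first, before leading_zeros exists;
# run at that point, it fails, and that failure is the point.
def test_leading_zeros():
    assert leading_zeros("42", width=5) == "00042"

# Green: the simplest implementation that makes the test pass.
def leading_zeros(s, width):
    return s.rjust(width, "0")

# Refactor: with the test green, internals can now change freely
# (e.g. a zfill-based version) while the test keeps protecting behaviour.
test_leading_zeros()
```

The cycle matters more than the code: each behaviour gets a failing test before any production line is written.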

  • @wealcome_company
    @wealcome_company 6 years ago +37

    The best talk about TDD ever, thanks!

  • @Outfrost
    @Outfrost 2 years ago +9

    This talk feels very sparse with information. The tl;dw is that you need to test requirements (public API; module exports; the behaviour that consumers expect from your software), not classes or methods. Ian gives few reasons why this can help you thrive in TDD, but doesn't really offer examples, so for most of the talk he ends up repeating the same statement, and doesn't explain it further for those who might not intuitively understand it. He gives off the vibe of a uni professor at a lecture, pacing back and forth, preaching the same idea over and over, bragging that they know this one great truth, and the students don't. Maybe Ian doesn't make it enigmatic, but I only got 5-10 minutes of information out of this.
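
The tl;dw above ("test the public API, not classes or methods") can be made concrete with a small hypothetical module: the test pins the public function's behaviour, so an internal helper can be rewritten without breaking it.

```python
# Hypothetical module: total_price is the public API; _subtotal is a detail.
def _subtotal(items):
    total = 0
    for price in items:
        total += price
    return total

def total_price(items, tax=0.2):
    return round(_subtotal(items) * (1 + tax), 2)

# The test pins the requirement ("total includes 20% tax by default"),
# never touching _subtotal directly:
assert total_price([10, 20]) == 36.0

# Refactoring the internal helper cannot break the behaviour test:
def _subtotal(items):          # new implementation, same behaviour
    return sum(items)

assert total_price([10, 20]) == 36.0
```

Had the test asserted on `_subtotal` itself, the refactor would have broken it despite the behaviour being identical.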

  • @vanivari359
    @vanivari359 1 year ago +2

    Amen. For something like 10 years I have been fighting, again and again, the battle that this IDE-driven approach to testing (tests focused on methods), and especially TDD done that way, is horrible and responsible for the lack of adoption. It's tedious because while my public interface is pretty much set from minute one, I rewrite my class and method structure behind it constantly until it stabilizes. But the definition of what a unit is was completely ruined by the industry, by IDE test-class generators, and also by most TDD tutorials, which are usually so simple that it looks like the method is the unit. If some of those tutorials had just added another class to the unit and spoken about this issue, we would not have this mess today.
    Also amen on the Gherkin-tool topic. I have not seen a single project in which the business actually cared about those tests (meaning read/wrote them), because most of them are not capable of, or interested in, structuring requirements precisely enough even in natural language, and most POs in large companies are just "managers managing external providers" anyway. So in the end, I still like to align automated acceptance tests with the customer, but I don't force my team to deal with stuff like Gherkin unless they really want to (as if that ever happens).
    BTW: not complaining about Gherkin, Cucumber, etc. It's a great idea and it works very well, but it's extremely hard to get the value out of it, because almost every tool that tries to pull the business deeper into development (BPEL, BPMN, BRM, etc.) fails: in the end a developer has to do the job anyway. I love BPMN frameworks, but there are about 10 business analysts on the planet able to change an executable BPMN process without breaking it.

  • @BryonLape
    @BryonLape 5 years ago +38

    It went wrong at the same point as everything else, when the professionals got ahold of it and wrote books on the subject, without ever actually doing it.

  • @Joe333Smith
    @Joe333Smith 6 months ago

    He's absolutely right about not testing one class or function in isolation. You test a functionality. This was always super obvious to me. But he spelled out the reasons in a little bit more detail whereas I kind of just implicitly felt it.

  • @robertmuller1523
    @robertmuller1523 2 years ago +1

    The reason why developers tend to focus on testing methods is that many organisations have established some kind of governance that uses code coverage or even method coverage as a central KPI. If you force developers to write at least one unit test for every single method, achieving a code coverage of at least 75% of every single method, then you cannot complain when someone says "I have to write a unit test, because I have to add a new method".
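
The kind of test that coverage-as-KPI governance produces can be caricatured in a few lines (hypothetical code): both tests below yield the same coverage number, but only one protects a requirement.

```python
def ship(order):
    # Hypothetical production code with a real requirement inside.
    if order["paid"]:
        return "shipped"
    return "held"

# A coverage-chasing test: it executes every line (100% for the KPI)
# but asserts nothing, so any regression in ship() still passes green.
def test_for_the_metric():
    ship({"paid": True})
    ship({"paid": False})   # no assertions at all

# A requirement-driven test achieves the same coverage AND protects
# the behaviour "unpaid orders must be held".
def test_unpaid_orders_are_held():
    assert ship({"paid": False}) == "held"

test_for_the_metric()
test_unpaid_orders_are_held()
```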

  • @tomasnadvornik2826
    @tomasnadvornik2826 2 years ago +1

    My man's fought so hard to keep that snot in. Respect

  • @jhaand
    @jhaand 2 years ago

    Great talk. It mostly reflects the philosophy we used at the project I was working on in 2017.

  • @rursus8354
    @rursus8354 11 months ago

    Wow! I formulated my own scratch-hacking method (for script writing) and taught it to my pupils, and its phases 1 and 2 sound very similar to the Q'n'D phase and the refactor phase of TDD. Thank you! That was interesting indeed.

  • @vekzdran
    @vekzdran 5 years ago +3

    Amazing lecture. Appreciate the hard-experience led talk and that is what gives it immense value, not the TDD/BDD but the higher understanding WHAT should be tested, i.e. the user behaviour. Fantastic, thank you.

  • @matthewclarke3989
    @matthewclarke3989 6 years ago +12

    Great talk, this is going to change how I approach TDD. My only complaint is there's an irritating/distracting sound as he walks around on stage. If that could be edited out, that would be perfect...

    • @ABCo-ABMedia
      @ABCo-ABMedia 3 years ago

      It's the microphone rubbing against his glasses, not a whole lot can be done about it really

    • @harrievanderlubbe2856
      @harrievanderlubbe2856 2 years ago +1

      and the runny nose

  • @fazlizekiqi2324
    @fazlizekiqi2324 4 years ago +4

    Great lecture! The explanation of the "duct tape programmer" was funny!

  • @larserikaa
    @larserikaa 1 year ago

    One of my favorite TDD talks ever! Thanks!

  • @suchoss4770
    @suchoss4770 6 years ago +3

    Well, there were a lot of Bobs in our department ten years ago. Today the whole department is locked in maintenance...

  • @stevecarter8810
    @stevecarter8810 5 years ago +11

    35:11 I've spent nearly a decade thinking I was the only one who spotted that

    • @williamlong4112
      @williamlong4112 4 years ago +1

      35:11 I've spent nearly a decade thinking I was the only one who spotted that

  • @harleyspeedthrust4013
    @harleyspeedthrust4013 2 years ago +4

    A lot of this stuff seems obvious, and yet the codebases I work on are still polluted with pointless, over-mocked tests that poke far too deep into implementation details. I can think of many tech debt stories I've worked on that require some refactoring, and the refactoring breaks a bunch of these poorly written tests. So a lot of my time is spent understanding and developing these tests, and figuring out what I need to change to get them up to date. At this point I wouldn't even be surprised if someone on the team proposed tests to test our tests.

    • @spankyspork5808
      @spankyspork5808 2 years ago +1

      That just sounds like being a programmer to me. Things change and code has to be updated, whether it's tests or not. Just because tests can be poorly written doesn't mean you shouldn't write them. I don't see the issue. If your implementation details are changing to the extent that it's that painful to refactor your code, it sounds like a design issue, not a testing issue.

  • @70ME3E
    @70ME3E 5 years ago +6

    this is gold. THIS is how testing makes sense.

    • @this-is-bioman
      @this-is-bioman 6 months ago

      Have you applied anything from this presentation that has changed your code? It's been a while since your comment.

  • @jasperschellingerhout
    @jasperschellingerhout 4 years ago +1

    I believe it was UCSD Pascal that introduced the concept of a unit (which I also believe was an inside joke: the pascal is also a unit of pressure). A unit in Pascal was similar to the combination of the interface and implementation files separated as .h(pp) and .c(pp) files in C/C++. When referencing a unit you only had access to the interface section. It was essentially the "exports", the "interface", or the "header" section. You had no access to internals. The analogy with exports from modules in Node.JS works very well; it's pretty much 1-to-1.

  • @ApacheGamingUK
    @ApacheGamingUK 3 years ago +1

    My first introduction to TDD was through a talk by Uncle Bob. At least, that was the first time I properly understood what "Write the tests that fail" truly meant. I was horrified by it. But this makes the whole process much more appealing. Bob's talk effectively taught "Red-Green-Red-Green-Red-Green", and the whole concept of TDD seemed to suck any amount of passion out of programming for me. It turned a vocation into a chore. It personified the meme about programmers turning coffee into code. This outside-in (London School) approach is far better than Classicist (Chicago School). Although, both can be very useful. What Ian had to skip with shifting gears was that, when you do know the answers to the problems, and you're just writing tests to conform to the standards, use London School, and build from the outside in. But, if you hit a wall, and find a problem that really doesn't make sense, then you can go Chicago School, and write classicist tests to develop the implementation step-by-step, shifting the focus from the API to the raw code. In C#, I'd start these in a new Assembly that can be unloaded later, to save the tests for posterity, while not interfering with the "production" test suite. It's always good to have a Sandbox assembly for scratchpad code.

  • @EduardsSizovs
    @EduardsSizovs 2 years ago

    Probably one of the best videos on TDD.

  • @Bizizl
    @Bizizl 6 years ago +9

    A few points on the topic:
    1. You mention that Bob's code is unmaintainable. Try maintaining code that is written by someone which has no unit tests.
    a) The architecture will most likely be wrong - TDD (at a unit/class level) goes a long way to fixing this.
    b) There's _bound_ to be test cases that do not exist if you test at feature level. If you unit test _every_ feature at a system level, have fun with covering all cases and mocking underlying classes and services. (ergo unmaintainability)
    2. You mention tests should be fast. Well yes and no. You don't have to run your feature tests _every_ build (i.e. run before release, not every dev build). In the case of the dev build you will still have your unit test coverage.
    3. Unit testing allows us to both verify intended behaviour (this relies on good test names/structure) and also check how an object/class/function should/can be used if the tests are written in a clear manner.
    4. When red-green-refactor is mentioned at 40:00, this should be taken loosely - don't _focus_ on the patterns, methods, class structure, performance to the n'th degree, but _do_ think about it. You don't want to make it work in some hilariously hacky way then literally completely rewrite the whole damn thing. That's just silly, unless of course you're POC'ing something in which case, why are you doing TDD?..
    All the testing you suggest is good, but it's not enough on its own.

    • @DanHaiduc
      @DanHaiduc 4 years ago

      1. Yes, that is painful. My best advice here is to use a good editor that supports automatic refactoring (which results in fewer mistakes). Move the code around and extract out the pure-functional bits, which are testable with little context, and write tests for them. As the functional parts grow, you can push the tests "higher", until they reach near-English abstraction (they clearly map to the business requirements of the app). What is left as non-functional is an imperative shell that calls all the pure-functions. Test that imperative shell using slow ATDD / Functional Tests that need longer runtimes, but all you need to test for is that it's hooked up correctly to the functional core.
      This sounds like it takes a lot of time, and it does. Only do this to code that is worth improving (i.e. you can't change the system without doing it). But you can do it gradually.
      a) TDD is not a silver bullet. Understanding Uncle Bob's Clean Architecture, together with TDD, is: [1]. In essence: the only code that may have side effects is the one at the top of the application, and looks something like this: 1. read input from whatever device/framework/web endpoint/DB/CLI 2. call pure functions on input 3. write the returned value of the function as output to whatever device/framework/....
      b) A need to mock like Frankenstein means your code is very side-effect-prone. That code is unmaintainable irrespective of TDD. Again, extract out as many pure-functional parts as possible (those can be easily tested in isolation).
      2. You have to still run the tests often enough in order to be able to integrate your work with the rest of your team's. So they do need to be fast (run at least 1-2 times daily on every dev's rebased branch).
      3. (no comment)
      4. I find myself easily distracted, usually. Doing TDD makes me much more focused - it lets me focus on getting the next use case to work, without spending time on pointless code. Many times, the code comes out faster because you are focused on it. See an experiment here: [2] - Jason Gorman writes code about twice as fast **with TDD**, in spite of having to also write tests. Therefore, you should time yourself.
      [1] The Grand Unified Theory of Software Development (by yours truly) - danuker.go.ro/the-grand-unified-theory-of-software-architecture.html
      [2] Figure 1.6 Time to completion by iterations and use/non-use of TDD - blog.howyousay.it/index.php/2017/10/14/why-unit-test/
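
The functional-core/imperative-shell shape described in this reply can be sketched like this (hypothetical names; the "devices" are just lambdas here):

```python
# Pure core: the business rule. No I/O, no side effects, no mocks needed.
def next_balance(balance, transactions):
    return balance + sum(t["amount"] for t in transactions)

# Imperative shell: only wiring. Read input, call the core, write output.
def settle(read_balance, read_transactions, write_balance):
    write_balance(next_balance(read_balance(), read_transactions()))

# The core is tested directly and runs fast:
assert next_balance(100, [{"amount": -30}, {"amount": 5}]) == 75

# The shell needs only a thin check that it is hooked up correctly:
out = []
settle(lambda: 100, lambda: [{"amount": -30}], out.append)
assert out == [70]
```

All the interesting logic lives in the pure function, so the slow outer tests only have to confirm the plumbing.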

  • @Austin9435
    @Austin9435 4 years ago +9

    Will the creaking ever stop! This is such a good talk and it is ruined by it.

  • @HELLOWORLD-ix9eg
    @HELLOWORLD-ix9eg 4 years ago +28

    Summary: Test the overall behavior (public API) not internal implementation details.

    • @JamesSmith-cm7sg
      @JamesSmith-cm7sg 4 years ago +1

      In a monolith web app that's browser testing only then?

    • @cluster027
      @cluster027 4 years ago +2

      Wish I saw this earlier. You'd have saved me an hour.

    • @dyyd0
      @dyyd0 4 years ago

      @@JamesSmith-cm7sg You may want to look at some Uncle Bob videos on this (ModelViewController). You do not want to include UI layer in testing web app if at all possible. As Ian pointed to in his talk as well in this video, UI can change without the underlying behavior changing.

    • @JamesSmith-cm7sg
      @JamesSmith-cm7sg 4 years ago

      @@dyyd0
      The UI should 100% be tested, but not unit testing, browser/end to end tests.

    • @gamemusicmeltingpot2192
      @gamemusicmeltingpot2192 4 years ago

      @@JamesSmith-cm7sg API does not mean the UI or an HTTP API; it means any part of your module that is used externally, i.e. public methods, but not the internal implementation.
      This tests all the required behaviour and relevant code without breaking all the tests if the internal implementation is changed.

  • @mykytafastovets8333
    @mykytafastovets8333 5 years ago

    Incredibly useful, especially when thinking through how to implement TDD correctly after starting lean, once your product starts reaching enterprise level.

  • @Gaming214-y3g
    @Gaming214-y3g 1 year ago +1

    Test behavior, not implementation: this mindset can be applied to unit tests or integration tests. What I was hearing, testing the API, leans toward integration testing, treating the system as a black box. This is essential and important, and it makes the tests easy to write and focused. However, depending on the scenario, a complex scenario can produce many complex test cases that are very difficult to debug when something goes wrong. Imagine manually testing the API and getting an error: do you immediately know what went wrong most of the time? When practicing TDD and really changing things minimally and incrementally, you might know. But if a developer writes a lot of code, implementing many logical units, then runs the tests and suddenly many scenarios fail? Or the code is already complete, the requirement didn't change, but the devs refactor to improve it? That can be very risky, because a single point of failure can make many test scenarios fail.
    Furthermore, an API could be doing many checks against other services or databases. How can the developer test without knowing what to mock or what test data to prepare? For example, the developer could be working on an Order requirement which needs to ensure that the product exists, is active, and meets several other criteria related to product availability, which may be implemented under another requirement and handled in the Product Service. Since the current requirement focuses on placing an Order, if you don't mock the Product Service, a lot of things can go wrong and fail the test, leaving the developer debugging for no reason. This often happens with manual testing before test automation: we develop our feature, deploy it, do some manual testing, something goes wrong, and we spend time tracing and debugging only to find it has nothing to do with our code; there was a condition in another service that caused the unexpected behavior.
    Furthermore, we used to have complex scenarios for a requirement with more than 10 complex precondition checks before the process was allowed to run. Those are just the precondition checks; for 10 preconditions, that already means at least 20 scenarios (a success and a failure case for each). But wait: if we're testing at the highest level, we also have to consider every possible dependency between the preconditions, or chains/combinations of them, so the scenarios would easily number far more than 20 and become too complex to even reason about. Even with manual testing, we would have to prepare all the possible data beforehand and test each scenario, while trying not to get overwhelmed and confused, especially when something fails, we debug it, make some changes, and then something else fails. This is real experience: we have devs who prefer to write high-level tests where the test itself has so many combinations of scenarios that reading it is scary and confusing. The dev ended up spending more time debugging and fixing than completing the requirement, which was just to add one more precondition to the existing ones. It was a big mess.
    What we do instead is write a unit test for each precondition, i.e. at the method level. By writing at the unit or method level, the focus is much smaller and simpler. Yes, it requires mocks, but we only need to mock what we want to test; we don't need to know the entire state of the object or record. The concept is similar to how we should break stories smaller: make things small, test them, and test behaviors.
    Lastly, saying "only test the API" is incorrect. If you're writing a shared or reusable method that others will call, shouldn't you ensure it is tested and behaves correctly? The requirement doesn't have to come from the customer.
    Similarly, when working as a team delivering a single user story, everyone may be delivering parts of the feature. Wouldn't it be better to ensure each part is tested and behaves correctly, so that nothing goes wrong when we integrate the code? Saying "only test the API" is very much a single-developer mindset with a simple use case in view. Sure, maybe some developers are so good that testing only at a high level isn't a problem even with complex scenarios, but from my experience working with many developers, pairing and coaching, a bigger scope often creates complexity, confusion, and messy implementations that are very difficult to read and maintain. And when only testing at high levels, clean code isn't really required: someone could write everything in one function and not worry about single responsibility. The point of making a unit testable is to make it small and single-responsibility, which makes the test easy to write, small in scope, and focused.
    In our practice, the goal is to test the requirement, which usually lives at the API, where we write integration tests, but we also encourage unit testing at lower levels. In the beginning, most devs would write no unit or low-level tests and focus everything on integration tests against the API, since those are easy to write, with many private methods underneath. But when a scenario fails, they need to know which part of the code failed, and that can involve more than the requirement states, e.g. the request body is incorrect, the format is incorrect, and many more conditions that are more technical than what the user story says. Because there are no unit tests, the high-level tests may not cover those, since they are based on the requirements. And if we were to add them, the complexity of the high-level tests would grow. Finally, high-level tests don't run fast, because we usually don't mock dependencies, so every test scenario takes more time to run.
    I think, and strongly believe, that behavior testing is not just about high-level, API, or integration tests. Behavior is just that: behavior. A feature is a feature. If you provide a library, a reusable method, or deliver a working unit to another developer, you're delivering something that has behaviors, and you can unit test it to ensure it behaves correctly. Other devs using the unit can then choose to mock that behavior when unit testing their own methods.
    E.g. you deliver a unit that formats a value, or a unit providing 'isProductValid' that accepts a product object: you can write tests at the unit level so that you don't have to test them at a higher level. Imagine many APIs that all need to validate a product; should each of them repeat the same redundant tests? And what happens when the requirement changes?
    We often look at whether a requirement could, or should, be tested at a lower level, especially with multiple layers worked on by multiple developers. Without mocking, each developer will test the same scenarios over and over again. In fact, this happened recently: two developers were working together on the same user story; one was working on a lower-level layer, wrote some units and tested them; the other, writing at the higher level and calling the first developer's method, repeated the same tests at the high level. The test scenarios for that method alone numbered over 10, and that layer wasn't even the API level yet. Do we have to repeat them again at the API level, producing even more scenarios?
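
The Order/Product example in this comment could be sketched like this (hypothetical names, using Python's `unittest.mock`): the external Product Service is stubbed at the boundary, so the test still exercises behaviour through the public entry point, yet only one thing can fail.

```python
from unittest.mock import Mock

def place_order(product_id, product_service):
    """Public entry point: order is accepted only for active products."""
    product = product_service.get(product_id)
    if product is None or not product["active"]:
        return {"status": "rejected"}
    return {"status": "accepted", "product_id": product_id}

# Stub the boundary, not the internals: this remains a behaviour test.
service = Mock()
service.get.return_value = {"active": True}
assert place_order("p1", service)["status"] == "accepted"

service.get.return_value = {"active": False}
assert place_order("p1", service)["status"] == "rejected"
```

Mocking at the service boundary keeps the test focused on the Order behaviour without depending on the Product Service's own state.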

  • @НиколайТарбаев-к1к

    This talk describes the original TDD concept, the so-called "classicist" school. There's also a newer school of TDD often called "mockist". The two oppose each other, and this talk is somewhat criticizing the mockist approach. To get a full picture I'd recommend everyone learn both concepts and decide for themselves which of the two better fits their field. What works for a microservice architecture might not always work for a mobile app.
    What I don't understand from this talk, and the classicist approach in general, is the refactoring step, which seems completely optional to me. Why would you refactor after something is already working? Because of a smell? How would you know it's a smell? Because you've read smart books? (The majority do not read books.)
    Also, higher-level (than a class or function) tests are more brittle. They do enable safe refactoring without changing the tests, but on the other hand they require changing multiple tests should the business requirements change.
    Likewise, the mockist approach drives not only the implementation but also its design. One could say it's not TDD but DDD, Design-Driven Development, as you have to come up with a design before writing any unit test. You could still write a higher-level test if you like, but I prefer writing it last, so it isn't hanging there red all the way through. And when a business requirement changes, you often only need to change a test or two and write more for new units. You can also safely reuse your code, because even the smallest unit is tested.

  • @LarryRix
    @LarryRix 2 years ago

    3X more "test code" than "dev code" (and other problems): this is where Design by Contract (see Eiffel) shines. DbC is when your test code comes out of the test environment/harness/mocks and becomes part of the production code itself, where the compiler is smart enough to exercise, report, and utilize your "contract assertions" (e.g. "test assertions") in everything except the production code. Even then, you can include some, part, or all of your "contract" code in various "versions" of production code (if you choose). This means you only need enough "test" (TDD) code to spin up objects and exercise them. Therefore test code is smaller, mocks are fewer, supporting data structures are smaller, and the whole "3X more test code" scenario is greatly reduced in size, scope, complexity, time, and money impact on your project. You may further dice this up by creating many small libraries, where each library lives insulated with its own production product and test environment, such that small changes to a library do not necessarily "leak" out all over your project code base.
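
A rough analogue of the Design-by-Contract idea in Python (illustrative only; Eiffel's compiler support is far richer): plain `assert` plays the role of contract assertions, checked during development and testing, and stripped entirely when the interpreter runs with `python -O`.

```python
def withdraw(balance, amount):
    # Preconditions: the caller's obligations, stated in the production code.
    assert amount > 0, "precondition: amount must be positive"
    assert amount <= balance, "precondition: cannot overdraw"
    result = balance - amount
    # Postcondition: the routine's promise to its callers.
    assert result >= 0, "postcondition: balance never negative"
    return result

assert withdraw(100, 30) == 70
try:
    withdraw(100, 200)           # violates a precondition
except AssertionError as e:
    print(e)                     # prints: precondition: cannot overdraw
```

The contracts travel with the code rather than living in a separate test harness, which is the size reduction the comment describes.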

  • @c5n8
    @c5n8 4 years ago +5

    TDD is not something you try on a weekend and then bring to work the next Monday; that would most probably be a disaster. It takes discipline, like, well, any other discipline, before you use it in real work. Uncle Bob suggests practicing it on a personal project in your spare time, and it may take months before you use it on your work project. TDD is an extraordinary skill, and it is expected that only a few people have it, because only people with discipline can master it.

  • @carlosmspk
    @carlosmspk 2 months ago

    How can you even do white-box TDD when you start from red tests? This talk is what always felt the most logical and intuitive to me. White-box tests only seem relevant in critical applications where you want to ensure the code is not just outputting what you want, but doing it in the way you expected.

  • @benjaminpapin4312
    @benjaminpapin4312 2 years ago

    Simply the video that could make the world a better place if everyone spent an hour of their time watching it.

  • @davidboreham
    @davidboreham 2 years ago

    Popping in here from the trenches to say this TDD cult has been infuriating me for years to the point I considered writing an article along the same lines. Nice to see this bloke has done my work for me.

  • @ghevisartor6005
    @ghevisartor6005 1 year ago

    What I find tests really useful for is when I need to check how a small piece of code works without running the entire app and navigating to that page. I know there is C# Interactive, but many times it's quicker this way: you write the code in the test and debug it, job done.

  • @coolashu2
    @coolashu2 3 years ago +1

    Essentially: focus on writing functional tests with concrete classes for TDD, rather than doing TDD as class-level unit testing with mocks.

  • @johannesprinz
    @johannesprinz 6 years ago +4

    Awesome talk! Bit of a mic drop at 1:02 DI Containers being evil, should use factories instead?!?! Love to hear more on this!

  • @neilclay5835
    @neilclay5835 2 years ago +2

    For me this is one of the most forgotten golden rules in the industry and it's costing billions.

    • @naveengupta6878
      @naveengupta6878 2 years ago

      Exactly how? How is it costing billions?

    • @tomvahlman8235
      @tomvahlman8235 1 year ago

      Developers have often misunderstood the concept of JUnit tests, thinking it is about testing a class, when it is really about testing a behaviour. They then mock the wrong dependencies of the class, creating fragile tests. Mocking is a good design tool when used in the right way, e.g. mocking "aggregations"/backends, things that evolve independently, as opposed to "has-a" dependencies, which should not be mocked because the tests will then be coupled to implementation details. Developers not working test-first miss a good tool for creating "clean code" and improving code quality and productivity. You preferably work in baby steps when implementing new business requirements, and TDD drives this work.
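
The behaviour-versus-class distinction can be sketched in a few lines. A hypothetical Python example: the test exercises only the public API, so the internal data structure is free to change under refactoring without breaking the test:

```python
class ShoppingCart:
    """Hypothetical example. The dict below is an implementation
    detail the test never touches, so it can be refactored freely
    (e.g. into a list of line items) without changing any test."""

    def __init__(self) -> None:
        self._items = {}  # name -> price; internal detail

    def add(self, name: str, price: float) -> None:
        self._items[name] = price

    def total(self) -> float:
        return sum(self._items.values())


def test_cart_totals_its_items() -> None:
    # Asserts a business requirement through the public API only:
    # no mocks, no peeking at _items.
    cart = ShoppingCart()
    cart.add("book", 10.0)
    cart.add("pen", 2.5)
    assert cart.total() == 12.5
```

A mockist-style test that verified `_items` was a dict, or that `add` was called twice, would break on refactoring even though the behaviour is unchanged.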

  • @ayoubb-dev
    @ayoubb-dev 10 months ago

    I was writing unit tests for the first time over the last 2 weeks in my current job. I was mocking the dependencies of the class and also spying on the implementation. I knew I was doing something wrong. Nice lecture; now I understand how to go about it.
    What do you think, guys, about mocking repositories with an in-memory database for unit tests?
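
One common answer to that question is a hand-rolled in-memory fake rather than a mocking framework. A minimal sketch with a hypothetical user repository:

```python
from typing import Optional


class InMemoryUserRepository:
    """Fake for a hypothetical database-backed repository. Unlike a
    mock, it has real (if trivial) behaviour, so tests assert on
    outcomes ("the saved user can be found") rather than on which
    methods were called."""

    def __init__(self) -> None:
        self._users = {}  # user id -> name
        self._next_id = 1

    def save(self, name: str) -> int:
        user_id = self._next_id
        self._next_id += 1
        self._users[user_id] = name
        return user_id

    def find(self, user_id: int) -> Optional[str]:
        return self._users.get(user_id)
```

Because the fake honours the repository's observable contract, tests written against it survive refactoring of the production data access code.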

  • @mpldr_
    @mpldr_ 2 years ago +21

    "Have programmers speak to customers"
    With most programmers I've met so far, this would be a great way of either getting rid of your customers or getting your programmers really annoyed.

    • @peterbaan9671
      @peterbaan9671 2 years ago +2

      Yeah, you need an SME of sorts who speaks customer and speaks engineer at the same time.

    • @thibauthanson7670
      @thibauthanson7670 2 years ago +5

      Sure, let's pile on with the stereotype, good job.

    • @peterbaan9671
      @peterbaan9671 2 years ago +9

      @@thibauthanson7670 - Stereotypes exist for a reason.
      Programmers are usually not "people people", nor salesmen, nor managers.
      There are some scenarios where it can be beneficial to have the programmer join the meeting with the customer, but it is rarely a good idea.
      You see, customer relations is a 100% different kettle of fish compared to programming. You need to know what can and can't be said in front of the customer, you have to be extremely polite all the time, you have to agree to their scope while changing that scope, etc.
      Having the programmer speak to the customer is a bad idea for the same reason as having a physician as the Minister of Health. It sounds good, but the position won't utilize the very skills that make somebody a doctor. Sometimes it works, but not because the person in question is a good doctor, but because he is good at politics.

    • @jonharson
      @jonharson 1 year ago +1

      Customers who reach engineering will be sacrificed to a Canaanite god, these are the rules of the game

  • @andrewreiser3584
    @andrewreiser3584 2 years ago +3

    The thing I find remarkable about this talk is the notion that the whole industry (pretty much) has misunderstood a whole approach to development, only to waste the money of their employers.
    I only did TDD once, for about 3 weeks, before I walked off the project. That was back in 2011, when all this nonsense was at its highest hype.
    The other remarkable thing is: why hasn't Kent Beck been screaming at us for a decade, "You're all doing it wrong"? Why just sit back and let the industry hang itself? Very odd.

    • @andrewreiser3584
      @andrewreiser3584 2 years ago

      @@Grauenwolf Because you engage in excess onanism.

  • @mar.m.5236
    @mar.m.5236 4 years ago

    This talk fixes all the problems I had with TDD... I have to read the originals... TY

  • @ComradeOgilvy1984
    @ComradeOgilvy1984 1 year ago +1

    16:16 What I have noticed is that BDD gives the illusion of better quality, while costing effort and not delivering more quality in a manner that matters. Yes, it sounds nice to wave a pretty test report in front of a Product Manager or Product Owner. Yes, it sounds nice to believe that PMs could be writing some of the tests. While this might generate enthusiasm in a few early meetings, the PM and PO will soon decide they have better things to do than worry about this stuff. Now you have additional code that is your BDD abstraction layer to maintain, for no actual value delivered.

  • @trinhngo2204
    @trinhngo2204 2 years ago

    Nice speech, thank you.
    An interesting viewpoint on TDD, and I totally agree about how it slows down the whole development process (when testing is implemented wrongly).

  • @PieJee1
    @PieJee1 2 years ago +1

    Good talk, and I had the same experience with too much TDD. But I still write unit tests for classes that are low level or have no framework dependencies, for example all value objects. I think you need balance, even though it is then less clear for junior developers when to write a unit test and when to write an integration test.

  • @amitev
    @amitev 7 years ago +23

    A lot of wisdom in one hour

  • @ScottKorin
    @ScottKorin 2 years ago +1

    Being able to talk to a db would get rid of all of the mocking that I end up doing in my tests.
    This is an interesting idea, and one my team has been avoiding for years. So we mock our data access layer.

    • @LuaanTi
      @LuaanTi 2 years ago

      The tricky thing is that this tends to effectively _hide_ the mocking code - that test database is _also_ part of the test suite, and needs to be understood, maintained, faked, reconstructed...
      There are some ways to avoid these, and you'll find they actually fit pretty well with attempts to code better (not just test better). Once the DAL is decoupled from the business logic, you no longer need to mock the DAL - it's completely external. If your business logic _calls_ the DAL, of course you need to mock it. It's not always easy, but I find it tends to result in code that's much easier to understand and predict. It also means your tests can focus on exactly what Ian and Kent are talking about - making sure the high-level requirements are satisfied. When product A has tax B and discount C, the result should be D. It should be completely irrelevant to the business logic how the data gets there - that's not the responsibility of the business logic! But it's also very easy to intertwine the logic and the DAL; easy to convince yourself it's necessary even.
      Of course, ultimately the main problem is that software is hard to write and maintain, and very complex. Too many people expect there to be a simple fix - simplicity is the key, yes... but that doesn't mean doing the simple thing is also easy :D It's much the same with writing anything else - it's hard to make a thing that is terse and easy to understand; anyone can write a wall of text. Anyone can use superfluous flowery language.
      A good writer could write this comment in two sentences :D Which funnily enough seems to me appropriately similar - if you use automated tools to help you, and abuse them - you can always write exactly all the words I wrote as two sentences, and feel smug how you "beat the system" (or worse "achieved the goal"), just like you can chase magical 100% coverage numbers, line number counts (or lengths), split a class into four (completely interdependent) classes and pat yourself on the back on how you "lowered the dependencies" even though you did no such thing. The tools aren't good enough to catch you cheating, and often guide you in the entirely wrong direction. I especially have a peeve with dependency analysers and DRY analysers - they don't understand what they're analysing, and it just doesn't work without understanding. Just because two pieces of code share the same text values _doesn't mean they're repetitions_ . The key thing always is "when something changes, do _both_ of these have to change?" Does the code cover the same requirement, or do they cover two separate requirements that just _happen_ to be similar right now?

    • @GabrielGasp
      @GabrielGasp 1 year ago

      @@LuaanTi I believe that’s exactly where Ports and Adapters come in, if your business logic only expects to receive an adapter (concrete implementation) that satisfies a port (an abstraction like an interface) to communicate with the outside world (database, rest apis, etc) you can easily create a simple mock adapter that "fits" in that port and run your tests with it.
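
A minimal sketch of that Ports and Adapters shape in Python, using a hypothetical currency-conversion port (`typing.Protocol` plays the role of the port's interface; all names here are illustrative):

```python
from typing import Protocol


class RatePort(Protocol):
    """The port: an abstraction the business logic depends on."""
    def rate_for(self, currency: str) -> float: ...


def convert(amount: float, currency: str, rates: RatePort) -> float:
    # Business logic talks only to the port, never to a concrete
    # HTTP client or database driver.
    return amount * rates.rate_for(currency)


class FixedRates:
    """A test adapter that "fits" the port; a production adapter
    would wrap the real rates service instead."""
    def rate_for(self, currency: str) -> float:
        return {"EUR": 0.5, "GBP": 0.25}[currency]
```

The test adapter needs no mocking framework: it satisfies the port structurally, so `convert` is tested against real behaviour at the boundary it actually depends on.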

  • @bicunisa
    @bicunisa 5 years ago +33

    Who the heck is playing with a balloon in front of the mic?

    • @motionsofttech
      @motionsofttech 4 years ago

      bicunisa 😂😂 that's weird indeed

    • @colin7406
      @colin7406 4 years ago +2

      I didn't hear it until I read this comment 30 minutes in lol... NGL I don't know if I can finish it now

  • @IlyaDenisov
    @IlyaDenisov 4 years ago +3

    He described the problems with too-granular tests that verify small implementation details - they do exist and cause pain, because ideas taken to the extreme tend to cause problems. But the other extreme - having a single test per externally "observable behaviour" of the system - is painful too, and the speaker didn't seem to mention that (or did I miss it?).
    What about the combinatorial explosion of execution paths? With N simple ifs you have 2^N execution paths - are you going to create a matching number of test cases for your behaviour? 256? 512? How far are you going to go down this road?
    The other aspect: tests are not just there to see if the code works, they also play the role of documentation with real examples. That is an important part too - it is not enough to describe the purpose of a class/method in a comment (or its name, if it fits) - examples are valuable, especially for the edge cases.
    It is always a balance. You can't just run away from the thing that hurt you as far as you can - you'll just get other problems there.

    • @MrMartingale1
      @MrMartingale1 4 years ago

      "What about combinatorial explosion of execution paths?"
      You get this problem when you don't stick to SRP.

    • @IlyaDenisov
      @IlyaDenisov 4 years ago +1

      @@MrMartingale1 probably. But you also get it naturally when there are a lot of special cases (typically happens in the UI). You may start splitting them into more classes, but that will definitely kill readability due to an inadequately high indirection/abstraction level - which also has a cost that you have to compare with the costs of the alternatives (e.g. the complex/strict test maintenance discussed in the talk). Why are you so categorical in your statement? What are your arguments? Please be more informative.

    • @MrMartingale1
      @MrMartingale1 4 years ago

      @@IlyaDenisov didn't mean to sound categorical. I tend to be laconic in YT comments. I don't know much about UI development so perhaps you're right.

  • @tocu9808
    @tocu9808 4 years ago +1

    Be conscious of the right abstraction level. The key is to properly identify your 'unit' under test.

  • @BorysMadrawski
    @BorysMadrawski 4 years ago

    It was about BDD, and about how not to kill ourselves with an enforced TDD approach demanding 100% coverage for every line of plain business code.
    Of course there is still a place for TDD if you develop/maintain very technical code with a stable interface, like a framework (e.g. Spring) or some algorithms, where you really need to test every method and almost every possible combination of interactions and data.

  • @SnijtraM
    @SnijtraM 2 years ago

    25:56 "something else, much higher up. And that is where you write the test". This is as close as it gets to what I preach. The trouble with automated testing is that it gets used to *replace* the skill of proper and fundamental thinking, such as *know* *what* *the* *heck* *you're* *doing* . Obsessive testing is a kind-of cargo-cult practice designed for individuals to feel both smart and important, when in reality those individuals are neither. The real danger of any ritual, if it gets decoupled from its purpose, is that narcissism quickly enters the scene and you will find yourself producing meaningless work to compensate and cover up for somebody else's severe mental feeding problems.
    Instead, literally, you start by testing *yourself* before you move on to write *any* line of code.

  • @wedotdd
    @wedotdd 4 years ago +2

    The tests are not for the customers to read. They're for developers to read, so that we can read the behavior of what we just wrote

  • @robertkelleher1850
    @robertkelleher1850 3 years ago +1

    Made it all the way to 41:42 and just realized why the slides are so confusing. All the slide headers are outside the frame and we can't see them.

  • @thomascook8541
    @thomascook8541 2 years ago +1

    The "API" for an app (i.e. an Android app) is the user interface - therefore the SUT is the user interface, the contract that your code exposes to the world. So, tests like "Given I have £100 in my account, when I receive a £100 transfer from a friend, then I receive a push notification stating that my balance is now £200" or "Given I have £100 in my account, when I transfer £50 to a friend, then I should see an updated balance of £50"

    • @AlexeyFilippov
      @AlexeyFilippov 2 years ago

      It's much deeper than that; also is very sad that people use the banking example under assumption that it is "simple" and "really understood" - you need the app to show £50 eventually, you need to be sure that the app won't show £0 couple hours later, you need to be sure that if you open a thousand apps and send £50 from each... something reasonable happens, but what exactly? And that's just the app, we haven't even touched any of the reporting and messaging. "Simple" and "understood."

    • @thomascook8541
      @thomascook8541 2 years ago

      @@AlexeyFilippov are you sure you are replying to the comment you intended to reply to? I'm asking because nowhere in my comment did I state the "eventual consistency" of banking applications was "simple". But whether the implementation of eventual consistency is complex or not, the point about testing the user interface to the system (i.e. the dumb client) that I made stands. The SUT of a dumb client is primarily the UI, i.e. you are writing tests that validate the use case works under all states of the SUT (which is a UI). If you have a back end API which performs no verification on client transactions (i.e. allows the client to cause catastrophic failure of the overall system), then of course the testing of the dumb clients necessarily requires low level implementation testing. But, if your back end is properly implemented, the client app, whilst having a myriad of complex states it has to represent, is always showing the "truth" that the back end is maintaining. And, therefore, the system under test is, by definition, primarily UI (and not business logic). For instance, your example of "what about the case where you open a thousand apps and spend £50 from each" can be answered with "So what? The server should deal with it."

    • @AlexeyFilippov
      @AlexeyFilippov 2 years ago

      @@thomascook8541 apologies, just agreeing with you :) and extending to include my own pet peeves. The approach in which the UI is practically excluded from the definition of "application" is sad, too.
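
The Given/When/Then cases at the top of this thread could be written as a behaviour test against a hypothetical dumb-client screen backed by a stubbed server. A minimal Python sketch (all class names are illustrative):

```python
class StubBackend:
    """Stubbed server: the dumb client treats it as the source of truth."""
    def __init__(self, balance: int) -> None:
        self.balance = balance

    def transfer_out(self, amount: int) -> None:
        self.balance -= amount


class AccountScreen:
    """Hypothetical dumb client: it only renders what the backend says."""
    def __init__(self, backend: StubBackend) -> None:
        self._backend = backend

    def send(self, amount: int) -> None:
        self._backend.transfer_out(amount)

    def shown_balance(self) -> str:
        return f"£{self._backend.balance}"


def test_balance_updates_after_transfer() -> None:
    # Given I have £100 in my account
    screen = AccountScreen(StubBackend(100))
    # When I transfer £50 to a friend
    screen.send(50)
    # Then I should see an updated balance of £50
    assert screen.shown_balance() == "£50"
```

The test asserts only what the user would observe; how the client fetches or caches the balance remains a refactorable detail.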

  • @wedotdd
    @wedotdd 4 years ago +5

    And this doesn't mean "Test everything through the UI with Cypress, Mount, etc." either. Be careful what Ian is saying to you here. He's not saying you must test everything through the UI (inverted test pyramid)

  • @JulianSildenLanglo
    @JulianSildenLanglo 2 years ago

    16:00 This is why I want to be able to tell my test suites to allow tests to fail until they've passed once.
    Alternatively you can invert the tests by having unimplemented features go green as long as the test fails, then when it starts working you remove the "expect to not work" flag.

  • @logiclrd
    @logiclrd 6 years ago +3

    Wouldn't it help somewhat to have a difference between, "This test has never passed", implying that it is for some new implementation that hasn't been done yet, and "This test was passing but has become broken", implying that it broke as a result of someone doing something? Do any frameworks actually do that??

    • @johnastevenson265
      @johnastevenson265 6 years ago

      Yes, it would. Python's pytest framework has an xfail flag for tests that you expect to fail because you haven't implemented the feature yet.

    • @roodborstkalf9664
      @roodborstkalf9664 6 years ago +1

      Yes, sounds like common sense. What is keeping writers of test frameworks from (optionally) introducing yellow for 'not implemented yet'?

    • @lexmitchell4402
      @lexmitchell4402 6 years ago +1

      A number of frameworks have 'yellow' states (often yielding a partial build success). Typically these are used for tests marked as ignored. I have seen people use ignored tests to get code that isn't finished into the master branch sooner rather than later. It does still damage confidence, as you now have code that might cause runtime issues; worse, since yellow/ignored tests don't break the build, they offer an avenue for mistakes/laziness to let actual failures go unnoticed. If possible I would avoid it.
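
The "expected to fail" idea discussed above (pytest's `xfail` flag) can be approximated in a few lines. A hypothetical, framework-free Python sketch of the mechanism:

```python
def expected_failure(test):
    """Minimal stand-in for pytest's @pytest.mark.xfail: a test that
    has never passed reports "xfail" (the build stays green), and
    reports "xpass" once it starts working, which is the prompt to
    remove the marker and make it a normal test."""
    def run():
        try:
            test()
        except AssertionError:
            return "xfail"   # still red, but expected
        return "xpass"       # now green: remove the marker
    return run


@expected_failure
def test_unimplemented_feature():
    # Placeholder for a feature that is specified but not built yet.
    raise AssertionError("feature not built yet")
```

In pytest itself, `@pytest.mark.xfail(strict=True)` goes one step further: an unexpected pass fails the suite, so stale markers cannot linger unnoticed.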

  • @BryonLape
    @BryonLape 5 years ago +3

    My current project uses SpecFlow for mid-level integration tests. It really sucks. More code is written for the tests than for the actual implementation, and they are very fragile.

  • @mortenbork6249
    @mortenbork6249 2 years ago

    If you have a behaviour that is "stuck" in a private or an internal, DI an interface in that executes the method.
    Now you can unit test the specifics of the method.
    Also, a private method that isn't DI'ed in is a code smell, because you are saying this method is locked to my implementation. Amazing that you can see the future so well that you are 100% sure no other method will require the same implementation.
    This corrects 99% of your dependency issues as well, because it forces you to decouple your class, basically, completely.

  • @wazum
    @wazum 2 years ago +1

    Great talk, but the constant sniffling is so awful that I don't feel like listening to it for long.

  • @researchandbuild1751
    @researchandbuild1751 5 years ago

    43:48 huh, great point. I always get stuck in the analysis-paralysis phase these days. I think I need to get Kent's book.