It would be great to have a good explanation of unit testing for models and database interactions; it's a topic in unit testing that I never see anyone tackle in a simple, intuitive way.
I will be exposed to TDD a bit soon, so I will keep all the items you mentioned in mind, thanks. The best thing is that when you apply the test-first-then-code approach, you really don't make functions overly complicated, and you write better, more testable code.
Coverage goals are great for unit testing but can lead to overconfidence with integration or end-to-end tests. Just because a test verifies that something is correct doesn't mean it's being used correctly somewhere else.
Coverage goals are awful. The only valid goal is 100%. If you have anything below that then what you mean is that it's okay for the untested code to just not work, in which case it may be better to remove it altogether.
100% coverage is silly in practice. There are all sorts of things, like debugging flags, that are generally senseless to test fully. It also leads to both bloated tests and overconfidence that the code is now free of bugs. Not to mention there are many common dynamic-language practices that can be mislabeled as covered even though not all branches are tested, lookup tables being a simple example.
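A lookup table makes the point concrete. In this hypothetical example, a single test marks every line as covered, yet only one of the table's four branches ever ran:

```python
# A dispatch table: the lines below count as "covered" after a single
# test, even though only one of the four handlers actually executed.
HANDLERS = {
    "add": lambda a, b: a + b,
    "sub": lambda a, b: a - b,
    "mul": lambda a, b: a * b,
    "div": lambda a, b: a / b,  # a division-by-zero bug would hide here
}

def apply_op(op: str, a: float, b: float) -> float:
    return HANDLERS[op](a, b)

# After this one assertion, line coverage reports 100% for this module,
# yet "sub", "mul" and "div" were never exercised.
assert apply_op("add", 2, 3) == 5
```

Branch coverage tools don't help here either, since from the interpreter's point of view there is only one code path through `apply_op`.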
I've got two questions, and I'd appreciate your help with them. 1. In TDD, we write tests first. What if we instead implement a feature that is logically implementable, then write tests, and then improve the functionality? 2. Is it okay to create a new object in `setUp` and then check the equality of each field of the created object against a valid sample object? That is, I first declare a variable, then create a new object with that variable's value, and then check that the created object's value equals the declared one.
Nice intro to TDD! However, it would be great to have an in-depth video on lean, fast, and efficient unit testing (pytest/unittest) with patching, fixture setups/teardowns, and caching (requests are expensive!).
Thanks and noted! This video was merely a starting point for me. I think quite a few people would like to see more content on testing, so that's a good excuse for me to dive into these things in the coming months!
Thanks Arjan! Great video, it comes at just the right time since I'm getting into TDD. I came up with a question from the bad-practices part: if I needed to test different functionality of the same class, and hence used a pretty much identical instance in every test, what if I made a dummy test instance with an added reset method I could call before every test, so I don't rewrite everything and still keep the tests decoupled? Would that be recommended, or should I just copy the object instead? 😅
Hi Sebastian, thanks and good to hear! Regarding your question, you can look into using the setUp method. This is called before each unit test is run, so you can write your instance initialization code there and it'll be available in all tests, without having to reset the object yourself.
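A minimal sketch of the `setUp` approach described above (the `Employee` class here is a made-up stand-in for whatever object your tests share):

```python
import unittest

class Employee:
    """Hypothetical class under test."""
    def __init__(self, name: str) -> None:
        self.name = name
        self.hours = 0

    def log_hours(self, amount: int) -> None:
        self.hours += amount

class TestEmployee(unittest.TestCase):
    def setUp(self) -> None:
        # Runs before EACH test method, so every test starts from a
        # fresh instance and no manual reset method is needed.
        self.employee = Employee(name="Alice")

    def test_starts_with_zero_hours(self) -> None:
        self.assertEqual(self.employee.hours, 0)

    def test_log_hours_accumulates(self) -> None:
        # Mutating the instance here cannot leak into the other test,
        # because setUp rebuilds it from scratch.
        self.employee.log_hours(8)
        self.assertEqual(self.employee.hours, 8)
```

There is also a matching `tearDown` method for cleanup after each test, useful when the fixture holds external resources like files or connections.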
Great video as always Arjan! Keep it up! I've been using TDD in my personal projects for 2 years now and it's one of the best approaches in my opinion. In my case, I use Robot Framework for testing as allows lots of flexibility, and was curious to ask if you have used it in the past and what is your opinion about it? Cheers
Great video, but can you tell us why you are using unittest instead of pytest? Is there any reason to switch?
Given the overhead, do TDD's benefits carry over to small personal projects, especially when a single person is working on the project, meaning the same person writes all the code and tests? If not, at what project size should TDD be seriously considered? I learned about TDD previously in a programming course. It was a completely new workflow for me and I found it really cool, but it seems like overkill for small projects.
Of course, sometimes I just throw things together, but I really prefer using TDD even for small personal projects because of the help it gives with design and staying on track. By adding just a single test in each round, the step to green with some small new functionality is also small and takes almost no extra time. Especially if you consider the extra debugging time you would otherwise (almost) always have later ;-)
Would you recommend testing with data in files? I mean, if you're just looking at a unit's output, you can hardcode it in the unit test, but if you generate a large dataset, say with numpy or pandas, my approach is to save known-good results to a file in the test folder and have the unit test compare against that.
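The golden-file approach described in this question could be sketched roughly like this (all names are hypothetical, and JSON stands in for whatever serialization fits your data):

```python
import json
import tempfile
from pathlib import Path

def generate_dataset() -> list[dict]:
    """Stand-in for the real data-generating code."""
    return [{"id": i, "value": i * i} for i in range(5)]

def assert_matches_golden(data: list[dict], golden_path: Path) -> None:
    """Compare freshly generated data against a saved known-good file."""
    expected = json.loads(golden_path.read_text())
    assert data == expected, "generated data diverged from the golden file"

# In a real project the golden file would be checked into tests/data/;
# it is created in a temp dir here only to keep the sketch runnable.
golden = Path(tempfile.mkdtemp()) / "expected.json"
golden.write_text(json.dumps(generate_dataset()))
assert_matches_golden(generate_dataset(), golden)
```

For numeric numpy/pandas data, comparing with `numpy.testing.assert_allclose` or `pandas.testing.assert_frame_equal` after loading the file tends to be more robust than exact equality, since it tolerates floating-point noise.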
12:55 - aren't you testing that the incoming object's defaults are the same as those of the mock-class object you've created there on line 20? Where in the standard library is the machinery to test the values? (You're testing the values, not the values' types.)
I still don't know how to properly write tests for my code, and sometimes the code is so complex (e.g. it makes requests to AWS, or requires some messaging through RabbitMQ to deal with a response later) that I always end up asking my team lead for help writing tests for my case. There is some mock magic that I just don't understand yet.
I think most devs need mocks a lot more frequently than TDD enthusiasts usually highlight. I agree mocking is hard to understand; you have to think a lot about exactly how namespaces and imports work, which doesn't sound like a big deal, but... I get hung up over and over, and I can tell from Stack Overflow that at least 4 other people get confused too. "Sometimes code is so complex" - one of the benefits is that it pushes you to organize your code so that you CAN put in testable mocks. To be a purist, you're not writing tests for existing code, or even broken/suspect code.
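The namespace point above is arguably the single biggest mocking gotcha: you patch a name where it is *looked up*, not where the function was defined. A self-contained sketch (the two tiny modules are invented for the example and written to a temp dir; in a real project they would be ordinary .py files):

```python
import sys
import tempfile
from pathlib import Path
from unittest.mock import patch

# Create the two hypothetical modules on the fly.
pkg = Path(tempfile.mkdtemp())
(pkg / "api.py").write_text(
    "def fetch():\n"
    "    raise RuntimeError('would hit the network')\n"
)
(pkg / "app.py").write_text(
    "from api import fetch\n"  # app binds its OWN name 'fetch'
    "\n"
    "def run():\n"
    "    return fetch()\n"
)
sys.path.insert(0, str(pkg))
import app  # noqa: E402

# Patching 'api.fetch' (where the function is defined) would NOT affect
# app.run(), because app already holds its own reference from the
# 'from api import fetch' line. Patch where it is looked up instead:
with patch("app.fetch", return_value="stub-data"):
    assert app.run() == "stub-data"
```

If `app.py` had used `import api` and called `api.fetch()`, then patching `api.fetch` would have been the right target instead, which is exactly why the import style of the code under test matters so much.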
Hi Arjan, I've got the basics of OOP down, but now I keep seeing you apply more advanced things. Where did you learn these? Or where can I learn them?
Hi Arjan, great videos as always, I wish I had watched this a year ago. After watching this video, I did a little more research of my own on TDD, and then I came across another method/framework called BDD (behaviour-driven development). Have you had any experience with it? Just curious whether you have any preference for either one and why, thanks!
I'm interested in how to design complex setups for tests. My main method is to go with pytest fixtures, but they are handled dynamically, and I hate how IDEs can't resolve them (for example, it's impossible to ctrl-click to jump to a fixture's definition in VS Code).
Nice video. My biggest shock with TDD was when I saw Uncle Bob in his Clean Code video. There he would have had 5 of those cycles for 5 unit tests. I'm curious about your opinion: when would you commit in your cycle?
Yeah, I think the traditional TDD method indeed splits this up per test. I do that as well, but in some cases I like to group writing a few tests at the same time, if the cases are related to each other. I find it's a bit more efficient because it requires less switching between contexts. In videos where I talk about testing, I also sometimes group test writing, but mainly to keep the video story a bit more easy to follow.
@@ArjanCodes I tend to keep the idea of those extra test cases on my notepad until needed. I might not actually need them, or want to do some other tests first.
You said write the test first, then write the code, so how come you already had all the methods, attributes, instance names, and functionality of your actual object handy? Or at least you had its structure crafted before writing the test.
Would you say that this approach is a stronger enforcement of the idea of, say, writing the docstring for a function before the code? They both pin down the main idea of a method/class so that, when writing the code, you don't begin moving in unrelated and possibly incorrect tangents.
Most of the time, unit tests are far more effective with a large number of test cases. Docstrings are better for documentation, which you can write in tandem with the unit tests.
The main reason I used unittest here is that it's built right into Python, and it's pretty capable. I might do a comparison video though to look at a few different testing frameworks and see what their strengths and weaknesses are.
@@ArjanCodes You really should abandon unittest; pytest offers so much more and requires writing less code for each test case. The argument "because it is built into Python" simply doesn't hold water. `pip install pytest` is no effort at all.
I know this isn't the topic at hand, but I didn't know you could explicitly declare types in Python until watching this video. After reading up a bit, I see Python doesn't enforce types, but some IDEs will call out type differences if types are annotated in your code the way Arjan does here. Since he's using VS Code, is that the benefit he gets from annotating types in his code here? Edit: I guess he's doing it because he's using dataclasses?
You're right that dataclasses need to know the type in order to do their job. In VSCode I use Pylance and if you add type hints to your code, Pylance helps point out any issues, which I find quite helpful. It also helps with autocomplete if the type of for example a function argument is known.
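A small illustration of both points (the names are made up): the annotations drive the dataclass machinery, and the annotated parameter is what lets the editor help you.

```python
from dataclasses import dataclass

@dataclass
class Employee:
    # @dataclass requires these annotations: it reads them to generate
    # __init__, __repr__ and __eq__ for the fields below.
    name: str
    hourly_rate: float = 25.0

def monthly_cost(employee: Employee, hours: int) -> float:
    # Because the parameter is annotated, Pylance/PyCharm can
    # autocomplete employee.hourly_rate here and flag a call like
    # monthly_cost("not an employee", 160) before you ever run it.
    return employee.hourly_rate * hours

assert monthly_cost(Employee(name="Alice"), 160) == 4000.0
```

Note that Python itself ignores the hints at runtime; the checks come from the type checker in the editor, not the interpreter.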
Pycharm can give completion to instances passed into a function if you do type annotation. I haven't looked at python linters but they obviously could leverage it.
My experience of trying to set up pytest has been full of rage. If I screw up a test, all of my tests disappear. Using `*_test.py` sometimes works. The pytest docs seem to be written for someone who just needs a reminder, not someone new to Python. It feels like I have to become an expert in pytest before I can even get more than the most basic hello-world test to run.
I love how we are both using Python, but your code hardly looks like Python, especially the strict typing. Do you think this is necessary? Should I try to change my habits? I get it in big projects, but do you always do this?
I’ve worked on teams that do this. Development is extremely slow and whenever a stakeholder changes requirements it leads to huge amounts of tests that are broken and now you’re fixing the tests and the code. Which is all fine, but your stakeholders better understand why you’re doing it up front or it’s just going to look like you’re moving slow.
I get that. Being very careful with the software design and making sure things are easy to change in the future is particularly important if you use TDD. And in the beginning, there's no need to reach 100% coverage, you just need to cover the basic functionality in your tests. But I fully agree it's crucial to have everyone on board with the development process you're following.
There are a number of traps with TDD (as with almost everything); one is having a lot of tests that are coupled to the implementation instead of just verifying behaviour. If you have fallen into that trap, as soon as your implementation changes, a lot of tests break. They shouldn't, as long as the external behaviour stays the same. If the external behaviour needs to change, the tests will need to change too. And I'd say the only way to avoid that entirely is to have no tests, something I would definitely not recommend ;-)
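The coupling trap described above can be shown in a few lines (a made-up class, with the internal dict as the implementation detail):

```python
import unittest

class ShoppingCart:
    """Hypothetical class; the dict is an implementation detail."""
    def __init__(self) -> None:
        self._items: dict[str, int] = {}

    def add(self, name: str, quantity: int = 1) -> None:
        self._items[name] = self._items.get(name, 0) + quantity

    def total_items(self) -> int:
        return sum(self._items.values())

class TestCartBehaviour(unittest.TestCase):
    def test_adding_twice_accumulates(self) -> None:
        # Asserts only observable behaviour, so it keeps passing if the
        # internals switch from a dict to, say, a list of pairs.
        cart = ShoppingCart()
        cart.add("apple")
        cart.add("apple", 2)
        self.assertEqual(cart.total_items(), 3)

# A coupled test would instead reach into the private attribute, e.g.
#     self.assertEqual(cart._items, {"apple": 3})
# That test breaks on any refactor of the storage, even though the
# external behaviour is completely unchanged.
```

The rule of thumb: test through the public interface, and treat any assertion on a `_private` attribute as a smell.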
I've seen someone use this approach and it was absolutely miserable to watch them. The problem is that people tend to not think about the problem at hand enough, and only focus on making tests pass. This can lead you to an over-complicated implementation, and possibly to a dead end. My approach is to take time to think how the problem can be approached, then settle on the best option and implement it first. Then, using that initial implementation, I would use the red-green-refactor method.
I really struggle with this TBH... I don't know how to find the time to write the tests first, not to mention get 100% coverage on the code base. TDD is a great methodology, and I would love to learn the test-first approach... but it is hard, and convincing colleagues that this is the way to go is hard... justifying time spent on this... is hard...
Look at it this way. If you write a piece of code, you have to make sure it works, right? So, you’re going to need to test it to some degree. You don’t have to go for 100% coverage. Just write the few tests that cover the most important cases. That won’t be more work, because you have to do it anyway. The difference now is: write those tests before you write the code that they test and see how it goes. From that point on, you’ll have an automatic *system* that you can extend and improve as you wish, which is a huge win.
@@ArjanCodes I think the part you mentioned, where we have to build a product and get it into the hands of the customer as soon as possible, is probably where we are at... we have to go through refactoring in our entire stack once we meet a critical milestone... I think this will be an important step to introduce TDD homogeneously to the team and enforce some behaviors that promote TDD. I personally would like to learn more from you about the process of using TDD to effectively create a continuous-delivery culture.
No particular reason, except that it does the job and Python ships with it. And that makes my examples more accessible. I will probably also look into Pytest in the future though.
It would be more convincing if the demonstrations of unit testing did something more complicated than adding two numbers together or similar. All the examples I have seen are so simple that writing a unit test seems like overkill - and time wasted. When I try to use them in my applications, testing anything significant requires almost another application to set up a testing environment, for example complex workflows using large databases. I prefer to set up test data, then calculate the expected outcome, and test to destruction (i.e. try to break the code).
The problem is that using something more complicated detracts from the main topic of the video, which is unit testing, so I don't want to spend more time than needed explaining the example code. The code is complex enough to require several different cases and simple enough to still be easy to follow for everyone. The goal of the video is to explain how test-driven development works, in my opinion the example is fine for that purpose.
@@ArjanCodes It might be a good video in the future. This video is the INTRODUCTION video, but to go to the next level, you could do a code kata from start to finish using TDD. Might I suggest the Roman Numerals kata? That is a really good one for TDD. Doing it in pytest instead of unittest will be simpler and require less code as well. Keep up the amazing work! Love the vids!
Maybe it would have been better if you hadn't had any code written at the start, and if you had written the tests one by one rather than all of them up front. Anyway, great video!
Ideally, that would have been best. I had that in the first version when I recorded the video, but the test writing part became too long with regards to the rest, so I decided to do it this way instead.
8:12 Umm, should an employee know all of these "employer" things? I think this violates high cohesion - an employee wouldn't be told how much an office costs, etc. Also, this looks like a code smell called data clumping, where the prefixes of the instance field names are all the same and should therefore be extracted into their own Employer class. That class would also have the calculate-payout method, rather than having it in the Employee class.
Who says that this is stuff that the employee knows? I mentioned in the video that this could be part of an HR system, so an employee may not even have access, only HR people. You could separate out the costs into a different data structure. I wanted to keep the example not too complicated, because that's not the focus of this video, so I decided to use a single class here. Creating these examples is always a trade-off.
Who's using TDD regularly in Python? If you have any tips to share, please post them here!
My number one tip is to give pytest a go. It is vastly superior to the vanilla unittest module.
Simply using plain assert instead of having to remember the hundred special unittest assertions saves a lot of time and is much easier to read.
E.g.
assert MyResult == 10
vs.
self.assertEqual(MyResult, 10)
And then... is it assertEqual or assertEquals?
Best not to even waste time with vanilla.
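To make the comparison above concrete, here is the same (made-up) check written in both styles; note that the pytest-style test is plain Python and needs no framework installed just to execute:

```python
import unittest

def calculate_payout(hours: int, rate: int) -> int:
    """Hypothetical function under test."""
    return hours * rate

# unittest style: a TestCase subclass and a named assertion method.
class TestPayoutUnittest(unittest.TestCase):
    def test_payout(self) -> None:
        self.assertEqual(calculate_payout(10, 5), 50)

# pytest style: a plain function and a plain assert. When run under
# pytest, a failing assert is rewritten so the report shows both
# operands, which is how pytest gets away without named assertions.
def test_payout_pytest() -> None:
    assert calculate_payout(10, 5) == 50

test_payout_pytest()  # runs fine even without pytest installed
```

One caveat: outside of pytest's assertion rewriting (as in the direct call above), a bare `assert` failure shows much less detail than `assertEqual` does, which is part of why the unittest camp values the named methods.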
@@virtualraider +1, I also think pytest is superior, with less boilerplate code. Maybe Arjan can compare those two frameworks in future videos.
+1
I recently got bitten by the TDD bug. I love it. It makes my code base feel strong and solid no matter how big it gets! Just press the test button, and if you know the tests are written well and they pass, you know your code is pretty close to being bug-free, which is a holy grail in dev ops!
(I also prefer pytest over unittest)
I'm using TDD, but in Java rather than Python.
I have been using it for years both professionally and personally. My primary focus is on automation and serverless web applications in a business-facing development setting.
I want to start off by disagreeing with the suggestion in another comment to use pytest as your test framework. I use pytest as a runner, but I would absolutely not suggest using its assertion style to write your tests. There are a lot of reasons for this, including the fact that unittest is built in, so no added dependencies are needed. Also, what happens if the maintainers of pytest stop maintaining the package? Lastly, and this is huge, experienced developers are intimately familiar with the built-in assertions. Don't underestimate the value of long, descriptive assertion names, especially when your tests evolve past simple equality checks. If you can't remember what they are, all I can say is: what is wrong with your IDE that it isn't offering suggestions when you type `self.assert`?
I also have an issue with the presentation of TDD in the video. What you demonstrated is test-first development, but you did not follow the discipline of TDD. The flow is to write just enough code to fail one test, then make that test pass, and repeat. I understand you wanted to save some time, but for people learning this for the first time, the semantics are important and should be conveyed as such. It is a discipline, after all, and as such it should be followed strictly when demonstrating the subject, even if in reality you do cut a few corners here and there.
All that said, I appreciate the work you do and anything that gets developers writing more and better tests is a win in my book! I'd love to see this turn into a series where you can go more in depth with stuff like mocking, the TPP (transformation priority premise), handling null and degenerate cases first, and ensuring that as the tests get more specific, the prod code gets more generic.
I want more videos about testing. It's a part of Python that we are all lazy about, but it requires our attention. I want to learn more.
You are the best at teaching python, I really mean it. Thank you
This.
Will do! And thank you so much!
I concur. I'm usually too lazy to write tests. However, once I'm neck-deep in spaghetti......
It’s not laziness, it just depends so heavily on the culture of your workplace. For example, most consulting companies don’t write tests. It’s usually not billable so you don’t do it.
Actually, it is much better to add only a single test per red-green-refactor loop. This enables step-wise, incremental implementation of the production code. It also makes it really easy to see whether you are in green, since all tests pass; you don't have to remember which tests are "supposed" to fail at this step.
I'd appreciate a video about mocking, an extremely useful technique which really takes your testing capabilities to the next level
Mock it until you make it. That's like the biggest first rule to follow. :)
Will do!
This channel has quickly become my favorite in all things Python.
Thank you, glad you like it!
I do use it. It makes a difference in your confidence about the code's behavior. Sometimes it's hard to test, but as Uncle Bob says, whether something is testable or not is a design feature. So you modify your application so it can be better tested.
You also don't need to write every test up front. Some of them appear in the middle of your project, and some tests get split up, etc. Everything is in the book, btw.
If you want to use TDD, but you don't know the requirements exactly before starting the project or you don't know exactly what you want to use to structure and implement those requirements, you could take the following approach:
Step 1: try to make the requirements less ambiguous and research technologies that could help achieve those requirements
Step 2a: make a proof of concept (no tests, rapid dev cycle, experiment with the technologies to see what works best, iterate to find the best way to structure the program)
Step 2b: demo the PoC (verify whether the requirements are met, and find things that need more attention or need to be faster. Go back to step 2a if necessary)
Step 3a: create a NEW repo; DON'T copy over the code from the PoC. Lay down the decided-upon framework for the program, without actually adding code. Use the PoC only for reference/"lessons learned" from now on. (You don't want untested and possibly buggy PoC code in production.)
Step 3b: lay down your high level tests (itest/e2e) based on the entry/input points of your program.
Step 3c: starting at the entry point to the application (REST/SOAP/JMS/gRPC/event/CLI call/etc.), lay down the interface/code skeleton for that feature and create your tests, TDD style! (Red phase)
Step 3d: implement the code to make the tests pass (Green phase). If the implementation defines more methods, don't implement those yet, unless they are simple and the existing test fully covers them. Mock where necessary.
Step 3e: refactor your code. Keeping your framework in mind when doing this and ask yourself if the framework still suits your needs the way it is now.
Step 3f: recursively go back to step 3c for all unimplemented methods.
Step 3g: verify that all methods are implemented and that the itests / e2e tests are now successful.
Step 4: deploy your program
Feel free to correct me if I missed something or give your opinion if you disagree with this approach.
I agree that this is, in my opinion at least, a great approach, mainly because it does not waste time coming up with throwaway tests before the design and interfaces have been fully fleshed out. Having to rewrite tests that are no longer compatible with the evolving design is a major waste of time, because writing useful tests is often time consuming, not least because building up a dummy input dataset is not trivial.
The downside of your approach, however, is that it requires a significant time commitment of its own, in the rewrite step. Building a functional PoC and then starting over again to do it "right" is ideal, in my opinion, but it is rarely an acceptable approach to stakeholders and product owners. As soon as stakeholders see a functional demo, they expect it to be in production within a couple of days. Drives me nuts.
Came red to TDD video, turned green while watching it, and now I am reformatting myself as a person (perhaps back to red).
Great video Arjan! I really like how you also discuss the drawbacks of TDD and that you need to balance the whole approach with your other activities and goals. Personally, I think TDD is an interesting testing and software development technique to have in the toolbox. However, as you pointed out there are more testing approaches to consider. In non-trivial projects unit testing is not enough.
For generating types of test data, pytest + hypothesis are a good combo. You can even cover None, nan and other outlier cases easily
Very nice Arjan, as always.
Just my two cents here: maybe it was only for the sake of didactics, but I believe the best approach to TDD is baby steps. So it's better to write a single test, make it pass, refactor, and then go on to the next.
Also, when you talked about having someone who writes tests and someone who writes the code: in a steady environment, developers should write tests, especially when we are talking about TDD, but we should still have QAs who would:
1. Pair with devs to think about test cases and test strategies
2. Validate those tests after they're done.
Nonetheless, I've been referring my friends to your videos. You're great!!!
Hi Jon - thanks! My main background is from software development in startups, and sometimes that means you're the only developer (no money for a QA team :) ), and I do like to write a few tests at the same time as I find it's more efficient. But you definitely have a point in that you want to avoid writing too many tests at the same time before making them pass and refactoring.
@@ArjanCodes been there. I also like to write tests. To be honest, I can't work without it anymore.
Great videos Arjan, keep it up! :D I would love to see some videos about software testing and test automation also with python.
I write embedded C, but have started writing TDD for the Python tools I create. I much prefer the fine-grained approach. One test at a time lets the test/spec guide the design better and puts fewer details in the air (avoiding confusion). Arjan seemed to already have a solution in mind for his example; maybe that was a side-effect of preparing to make the video.
I would love a video on pytest! I have been using it recently but I feel like I would get a better idea with your input. Thanks for the video!
Thanks for the suggestion!
TDD is especially beneficial for early stage development. Your tests will tell you if the feature is not what you want.
It would be great to have a good explanation about unit testing for Models and database interactions, it's a topic in unit test that I never see anyone tackle in a simple intuitive way.
Great suggestion!
I will be exposed to TDD a bit, I will consider all those items you mentioned thanks.
The best thing is that when you apply the test-first, then code approach, you really don't make functions overly complicated, and you write better, more testable code.
Thanks again, Arjan. Looking forward to the mocks and patch videos... I find them particularly complicated.
I would like to see you performing code roasts, or refactoring videos using TDD.
Coverage goals are great for unit testing, but can lead to overconfidence with integration or end-to-end tests. Just because a test verifies something is correct, doesn't mean that it's being used correctly somewhere else.
Indeed, good point!
Coverage goals are awful. The only valid goal is 100%. If you have anything below that then what you mean is that it's okay for the untested code to just not work, in which case it may be better to remove it altogether.
100% coverage is silly in practice. There are all sorts of things, like debugging flags, that are generally senseless to test fully. It also leads to bloated tests and to overconfidence that the code is now free of bugs.
Not to mention there are many common dynamic constructs that can be mislabeled as covered even though not all branches are tested. Lookup tables are a simple example.
I've got two questions. I would appreciate if you assist me there.
1. In TDD, we write tests first. What if we instead implement a feature that is logically implementable, then write the tests, and then improve the functionality?
2. Is it OK to create a new object in `setUp` and then check each field of the created object against a valid sample object for equality? That is, first I declare a variable, then I create a new object from that variable's value, then I check that the created object's value equals the declared one.
Great introduction to testing video. Useful to have the pros and cons
Glad you liked it, James!
Nice intro on TDD! However, It would be great to have in depth video on lean, fast and efficient unit testing (pytest/unittest) with patching, fixtures setups/teardowns and caching (requests are expensive !)
Thanks and noted! This video was merely a starting point for me. I think quite a few people would like to see more content on testing, so that's a good excuse for me to dive into these things in the coming months!
@@ArjanCodes Indeed, I think unit test on python requests could be extremely useful for a lot of people! Keep It up!
Thanks Arjan! Great video, comes at just the right time since I'm getting into TDD.
I came up with a question from the bad-practices part: if I needed to test different functionality of the same class, and hence would use a pretty much identical instance in every test, what if I made a dummy test instance with an added reset method I could call before every test, so I don't rewrite everything and the tests stay decoupled? Would that be recommended, or should I just copy the object instead 😅
Hi Sebastian, thanks and good to hear! Regarding your question, you can look into using the setUp method. This is called before each unit test is run, so you can write your instance initialization code there and it'll be available in all tests, without having to reset the object yourself.
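A minimal sketch of that setUp pattern (ShoppingCart is a made-up class for illustration):

```python
import unittest

# Hypothetical class used to illustrate setUp.
class ShoppingCart:
    def __init__(self):
        self.items = []

    def add(self, name, price):
        self.items.append((name, price))

    def total(self):
        return sum(price for _, price in self.items)

class CartTest(unittest.TestCase):
    def setUp(self):
        # Runs before *each* test method, so every test gets a
        # fresh cart and no manual reset is needed.
        self.cart = ShoppingCart()

    def test_empty_cart_total_is_zero(self):
        self.assertEqual(self.cart.total(), 0)

    def test_total_sums_prices(self):
        self.cart.add("tea", 3.50)
        self.cart.add("mug", 8.00)
        self.assertEqual(self.cart.total(), 11.50)
```

Run it with `python -m unittest`. There is also a matching `tearDown` for cleanup after each test.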
I find your tutorials so easy to understand. Could you do something in continuous integration and deployment?
Thanks for the suggestions, I've put it on the list.
Glad you liked the video!
Great video as always Arjan! Keep it up!
I've been using TDD in my personal projects for 2 years now and it's one of the best approaches in my opinion.
In my case, I use Robot Framework for testing as allows lots of flexibility, and was curious to ask if you have used it in the past and what is your opinion about it?
Cheers
Thanks Miguel, glad you like it. I don't have experience with Robot, but I'll take a look - thanks for the suggestion!
Good explanation) Really useful
Glad you liked it, Nate!
Thank you very much for this!
Thanks so much Yashar, glad you liked it! :)
Great video, but can you tell why you are using unittest instead of Pytest? Is there any reason to switch?
Given the overhead, does TDD's benefits carry over to small personal projects, especially when it may be a single person working on said project, meaning the same person writing all the code and tests? If not, at what project size should TDD be seriously considered?
I've learned about TDD previously in a programming course. It was a completely new workflow for me and I found it really cool, but it seems like overkill to me for small projects.
Of course, sometimes I just throw things together, but I really prefer using TDD even for small personal projects because of the help it gives with design and with staying on track. By adding just a single test in each round, the step to green with some small new functionality is also small and takes almost no extra time, especially if you consider the extra debugging time you would otherwise (almost) always have later ;-)
Would you recommend testing with data in files? I mean, if you're just checking a unit's output, you can hardcode it in the unit test, but if you generate a large dataset (say with numpy or pandas), my approach is to save known-good results to a file in the test folder and have the unit test compare against that.
12:55 - aren't you testing to make sure the incoming object's defaults are the same as the mock-class object you've created there on line 20 ?
Where is it in the standard library to test the values? (you're testing the values not the values' types)
Do a video on Test && Commit || Revert (TCR)
Hello. Could you make video with pytests examples?
I still don't know how to properly write tests for code, and sometimes the code is so complex (e.g. it makes requests to AWS, or requires some messaging in RabbitMQ to deal with a response later) that I always end up asking my team lead for help writing tests for my case. There is some mock magic that I particularly don't understand.
I think most devs need mocks a lot more frequently than TDD enthusiasts usually highlight. I agree mocking is hard to understand; you have to think a lot about exactly how namespaces and imports work, which doesn't sound like a big deal, but... I get hung up over and over, and judging from Stack Overflow, at least 4 other people get confused too.
"Sometimes code is so complex" -- one of the benefits is that TDD pushes you to organize your code so that you CAN put in testable mocks. To be a purist: in TDD you're not writing tests for existing code, or even for broken/suspect code.
Hi Arjan, I've got the basics of OOP down, but now I keep seeing you apply somewhat more advanced things. Where did you learn these? Or where can I learn them?
Hi Arjan, great videos as always, wish I had watched this a year ago. After watching this video, I did a little more research of my own on TDD, and then came across another method/framework called BDD (behaviour-driven development). Have you had any experience with it? Just curious whether you have a preference for either one, and why. Thanks!
thanks for the video Arjan! I teach testing and TDD, your video is extremely clear and easy to understand, I'll recommend it to my students. Thanks!
Thanks John, glad you liked the video!
I'm interested in how to design complex setups for tests. My main method is to go with pytest fixtures, but they are handled dynamically and I hate how IDEs can't identify them (for example, it's impossible to Ctrl-click to jump to a fixture's definition in VS Code).
Loved this thank you
Noob question: how did you refactor your code on the fly with a shortcut? Is that a custom shortcut running black?
Nice video. My biggest shock with TDD was when I saw Uncle Bob in his clean code video. There he would have had 5 of those cycles for 5 unit tests. I'm curious about your opinion, when would you commit in your cycle?
Yeah, I think the traditional TDD method indeed splits this up per test. I do that as well, but in some cases I like to group writing a few tests at the same time, if the cases are related to each other. I find it's a bit more efficient because it requires less switching between contexts. In videos where I talk about testing, I also sometimes group test writing, but mainly to keep the video story a bit more easy to follow.
@@ArjanCodes I tend to keep the idea of those extra test cases on my notepad until needed. I might not actually need them, or want to do some other tests first.
If TDD is basically setting requirements in advance, is it then a waterfall process in disguise?
You said to write the test first, then write the code, so how come you already had all the methods, attributes, instance names, and functionality of your actual object handy? Or at least you had its structure crafted before writing the test.
How can I get some of the benefits of test driven development when writing code that's experimental in nature, like is often the case in data science?
your videos are the best 👍👍
Thank you Kevin, glad you like them!
Would you say that this approach is a stronger enforcement of the idea of, say, writing the docstring for a function before the code? They both pin down the main idea of a method/class so that, when writing the code, you don't begin moving in unrelated and possibly incorrect tangents.
Most of the time, unit tests are far more efficient when there is a large number of test cases. Docstrings are better for documentation, which you can write in tandem with the unit tests.
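There is also a middle ground between the two: the standard library's doctest module turns docstring examples into runnable tests, so the docstring-first habit and the test-first habit become the same thing. The payout_ratio function below is invented for illustration.

```python
def payout_ratio(bonus, salary):
    """Return the bonus as a fraction of the salary.

    >>> payout_ratio(500, 2000)
    0.25
    >>> payout_ratio(0, 2000)
    0.0
    """
    return bonus / salary

# Execute the examples embedded in this module's docstrings.
import doctest
assert doctest.testmod().failed == 0
```

doctest works well for a handful of illustrative cases; for large case tables, a regular test file scales better.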
13:37 is that a forgotten print? (another testing mistake btw, also 1337 :)
Hello, is there a reason you use default tests and not a framework like pytest. Do you have a preference?
The main reason I used unittest here is that it's built right into Python, and it's pretty capable. I might do a comparison video though to look at a few different testing frameworks and see what their strengths and weaknesses are.
@@ArjanCodes You really should abandon Unittest, pytest offers so much more and requires writing less code for each test case. The argument for "because it is built into Python" simply doesn't hold water. pip install pytest is no effort at all.
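For readers weighing the two, here is the same (made-up) test written in both styles. The pytest version is legal because pytest rewrites plain assert statements at collection time to produce detailed failure messages.

```python
import unittest

# Trivial placeholder function under test.
def add(a, b):
    return a + b

# unittest style: a class, inheritance, and a named assertion.
class TestAdd(unittest.TestCase):
    def test_add(self):
        self.assertEqual(add(2, 3), 5)

# pytest style: a plain function and a bare assert.
def test_add():
    assert add(2, 3) == 5
```

Note that pytest can also discover and run the unittest-style class, which is why "pytest as a runner, unittest for assertions" (as suggested above) is a workable middle ground.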
I know this isn't the topic at hand, but I didn't know you could explicitly declare types in Python until watching this video. After reading up a bit, I see Python doesn't enforce types, but some IDEs will call out type mismatches if types are annotated in your code the way Arjan does here. Since he's using VS Code, is that the benefit he gets from annotating types in his code here?
Edit: I guess he's doing it because he's using dataclasses?
You're right that dataclasses need to know the type in order to do their job. In VSCode I use Pylance and if you add type hints to your code, Pylance helps point out any issues, which I find quite helpful. It also helps with autocomplete if the type of for example a function argument is known.
Pycharm can give completion to instances passed into a function if you do type annotation. I haven't looked at python linters but they obviously could leverage it.
Which domain do you use?
My experience of trying to set up pytest has been full of rage. If I screw up a test, all of my tests disappear. Using *_test.py sometimes works. The pytest docs seem to be written for someone who just needs a reminder, not someone new to Python. It feels like I have to become an expert in pytest before I can even get more than the most basic hello-world test to run.
I was the 610th like on the video. Fibonacci would be pleased. Thank you for these videos.
Haha, you’re most welcome!
I love how we are both using Python, but your code hardly looks like Python, especially the strict typing. Do you think this is necessary? Should I try to change my habits? I get it in big projects, but do you always do this?
Logic inside a test at 12:50 .... ouch 😮 Never do that. But a great vid though ❤
Cool video
I’ve worked on teams that do this. Development is extremely slow and whenever a stakeholder changes requirements it leads to huge amounts of tests that are broken and now you’re fixing the tests and the code.
Which is all fine, but your stakeholders better understand why you’re doing it up front or it’s just going to look like you’re moving slow.
I get that. Being very careful with the software design and making sure things are easy to change in the future is particularly important if you use TDD. And in the beginning, there's no need to reach 100% coverage, you just need to cover the basic functionality in your tests. But I fully agree it's crucial to have everyone on board with the development process you're following.
> or it’s just going to look like you’re moving slow.
you *are* moving slow... slow *and* steady :-)
There are a number of traps with TDD (as with almost everything). One is having a lot of tests that are coupled to the implementation instead of just verifying behaviour. If you have fallen into that trap, a lot of tests break as soon as your implementation changes. They shouldn't, as long as the external behaviour stays the same. If the external behaviour needs to change, the tests will too, and I'd say the only way to avoid that is to have no tests, something I would definitely not recommend ;-)
Anyone else prefer to have the expected value before the function call: assert expected == func()?
I've seen someone use this approach and it was absolutely miserable to watch them. The problem is that people tend to not think about the problem at hand enough, and only focus on making tests pass. This can lead you to an over-complicated implementation, and possibly to a dead end.
My approach is to take time to think how the problem can be approached, then settle on the best option and implement it first. Then, using that initial implementation, I would use the red-green-refactor method.
Why didn't you select pytest instead of unittest...pytest requires less typing.
I really struggle with this TBH... I don't know how to find the time to write the tests first, not to mention getting 100% coverage on the code base. TDD is a great methodology and I would love to learn the test-first approach... but it is hard, convincing colleagues that this is the way to go is hard, and justifying the time spent on this... is hard...
Look at it this way. If you write a piece of code, you have to make sure it works, right? So, you’re going to need to test it to some degree. You don’t have to go for 100% coverage. Just write the few tests that cover the most important cases. That won’t be more work, because you have to do it anyway. The difference now is: write those tests before you write the code that they test and see how it goes. From that point on, you’ll have an automatic *system* that you can extend and improve as you wish, which is a huge win.
@@ArjanCodes I think the part you mentioned where we have to build a product and get into the hands of the customer as soon as possible is probably where we are at... we have to go through refactoring in our entire stack once we meet a critical milestone... I think this will be an important step to introduce TDD homogeneously to the team and enforce some behaviors that promote TDD. I personally would like to learn more from you about the process of using TDD effectively create a continuous delivery culture.
We’re going through the same phase as well in my startup, so I’ll definitely report back and share my experience in upcoming videos!
@@ArjanCodes That would be very enlightening... thank you 😊
thank you!
Thank you so much!
Great video Arjan! Didn’t know about TDD before this video, might start to implement it at work.
Glad it was helpful!
"TDD IS THA SHIZNIT" indeed. :)
Why unitest instead of pytest?
No particular reason, except that it does the job and Python ships with it. And that makes my examples more accessible. I will probably also look into Pytest in the future though.
It would be more convincing if the demonstrations of unit testing did something more complicated than adding two numbers together or similar. All the examples I have seen are so simple that writing a unit test seems like overkill and wasted time. When I try to use them in my applications, testing anything significant requires almost another application to set up a testing environment, for example complex workflows using large databases. I prefer to set up test data, then calculate the expected outcome and test to destruction (i.e. try to break the code).
The problem is that using something more complicated detracts from the main topic of the video, which is unit testing, so I don't want to spend more time than needed explaining the example code. The code is complex enough to require several different cases and simple enough to still be easy to follow for everyone. The goal of the video is to explain how test-driven development works, in my opinion the example is fine for that purpose.
@@ArjanCodes it might be a good video in the future. This video is the INTRODUCTION video. But going to the next level if you do a code kata from start to finish using TDD.
Might I suggest the Roman Numerals kata, that is a really good one for TDD. Doing this in pytest instead of unittest will be simpler and less code as well
Keep up the amazing work! Love the vids!
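For anyone curious, the result of the first few red/green rounds of the Roman Numerals kata might look roughly like this: a sketch, not a complete solution, with each test having forced a small generalization of the lookup table.

```python
# Partial solution driven by the tests below; it only handles
# the values those tests have forced so far (up to 39).
def to_roman(n):
    numerals = [(10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I")]
    result = ""
    for value, symbol in numerals:
        while n >= value:
            result += symbol
            n -= value
    return result

# pytest-style tests, added one at a time (red), each made to
# pass (green) before writing the next.
def test_one():
    assert to_roman(1) == "I"

def test_four():
    assert to_roman(4) == "IV"

def test_nine():
    assert to_roman(9) == "IX"

def test_fourteen():
    assert to_roman(14) == "XIV"
```

The next failing test (e.g. for 40) would force extending the table with (40, "XL"), and so on; the production code gets more generic as the tests get more specific.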
Thanks for the suggestion, will look into it!
did anyone else notice the “TDD IS THA SHIZNIT”
Maybe it would have been better if you hadn't had any code written at the start, and if you had written the tests one by one instead of all of them up front. Anyway, great video!
Ideally, that would have been best. I had that in the first version when I recorded the video, but the test writing part became too long with regards to the rest, so I decided to do it this way instead.
8:12 umm, should an employee know all of these "employer" things? I think this violates high cohesion: an employee wouldn't be told how much an office costs, etc. Also, this looks like a code smell called data clumping, where the prefixes of the instance field names are all the same and should therefore be extracted into their own Employer class. That class would also have the calculate-payout method, rather than having it in the Employee class.
Who says that this is stuff that the employee knows? I mentioned in the video that this could be part of an HR system, so an employee may not even have access, only HR people. You could separate out the costs into a different datastructure. I wanted to keep the example not too complicated, because that’s not the focus of this video, so I decided to use a single class here. Creating these examples is always a trade-off.
How about decimal? 4:30 OH WAIT, does Python even have decimal? I just switched back from C# 😂
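It does: the standard library's decimal module is the closest analogue to C#'s decimal type.

```python
from decimal import Decimal

# Binary floats accumulate representation error:
assert 0.1 + 0.2 != 0.3

# decimal.Decimal does exact base-10 arithmetic, much like
# C#'s decimal, so it suits money calculations:
assert Decimal("0.1") + Decimal("0.2") == Decimal("0.3")
```

Note that Decimal should be constructed from strings (or integers), since `Decimal(0.1)` would inherit the float's binary rounding error.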
Not sure if I'd agree that its "tha shiznit" but yeah.
:)
Testing manually is boring and takes forever. Let's automate it. -Kent Beck, probably.