I'm glad TDD and DDD are still being practiced. I was worried there for a few years that they would be forgotten for newer shiny toys. It's solid fundamental knowledge IMHO. Cheers
Chris is an excellent speaker and this talk has been featured in the last issue of Tech Talks Weekly newsletter 🎉 Congrats!
Great talk - it works excellently as a crash course for TDD.
Thank you!
Currently reading a book titled Implementing Domain-Driven Design by Vaughn Vernon and this talk has just brought some of the concepts I have read in the book so far to life. Thank you for this wealth of knowledge shared
Thanks so much!
Great book
Really good and excellent talk. In particular, it explains in a very simple and easy way how event sourcing works.
Great video. Not only the how of starting with TDD in a conceivable situation, but also the why - of TDD itself, of each next step, and of the size of the steps.
Thank you!
Where can I find the demo?
I can't paste a link but check the qr code at 51:54
I liked the TDD part of this talk, but - I don't know if it's only me - I have some doubts about the way DDD was presented here. Some of the presented building blocks and patterns are quite odd and don't follow Clean Code rules IMO. Or maybe it was just simplified for the sake of the live presentation 🤔
Yes, it's true that the presentation doesn't follow Clean Code patterns. This is deliberate, for 2 reasons:
1. The presentation is intended to show how even if you start out thinking you are just building a simple CRUD app, you can use tdd-supported refactoring to introduce some DDD principles (like ubiquitous language and intention revealing names) to make things easier as the requirements get more complex. It is definitely NOT how I would build a solution if I intended to use DDD from the start.
2. Clean Code != DDD - you can use Clean Code without doing DDD and vice versa. Clean Code patterns are one particular way of structuring your source code; DDD simply requires that your domain logic is encapsulated, which the original Blue Book defines as a Layered Architecture. Since then most folks have adopted Clean, or Ports and Adapters (there are some subtle differences there, too) or even Vertical Slice, which is the main pattern used here. You can do DDD with any of them.
Nice stuff. I learnt for the first time that breaking things down into small steps makes more sense than doing it all in one go, as you rightly expressed about the balance between testing vs implementation. I have one question: how do you see REST (Resource) + DTO vs Domain... how do they, or should they, fit together in your view?
Thanks! Great question - I think a DTO to define the payload of a REST resource request or response is a very useful pattern for at least 2 scenarios:
1. You want to have some private data on the domain model that you don't want exposed via the API
2. You want to ensure a consistent contract on the API but have the freedom to change your domain model.
In this demo, it starts with just sending the domain model as the API response. However, the tests include their own copy of the API payload - so effectively, a DTO specifying the API consumer's expectation. If you modify the domain model such that it breaks the test, then that could be solved at that time by creating a DTO on the application side and setting up a mapping from the domain model.
To further protect things, you should ensure the tests use 'strict' deserialisation - e.g. unexpected fields cause deserialisation failure.
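For illustration only (this isn't code from the demo repo - the types and field names are invented), a DTO plus strict test-side deserialisation could look roughly like:

import com.fasterxml.jackson.databind.DeserializationFeature;
import com.fasterxml.jackson.databind.ObjectMapper;

// Sketch only - 'Course' stands in for the demo's domain entity; fields are assumptions.
record Course(String id, String name, String internalNotes) {}

// The DTO exposes just the fields that form the API contract.
record CourseResponse(String id, String name) {
    static CourseResponse from(Course course) {
        return new CourseResponse(course.id(), course.name());
    }
}

class StrictApiParsing {
    // In the tests, a strict mapper fails on unexpected fields, so contract drift is caught early.
    static CourseResponse parseStrict(String json) throws Exception {
        ObjectMapper strict = new ObjectMapper()
                .enable(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES);
        return strict.readValue(json, CourseResponse.class);
    }
}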
Great demo
Thanks!
How is Contextive different from plain comments or JavaDoc comments? It looks like the same goal can be achieved there.
Comments such as JavaDoc are tied to the class, so they only show in the hover popup when you use that specific class - e.g. if you define Student on the Student entity class, you'll only see it when using the class name. Contextive looks for the text string. This means it will show the definition of the word in more places, such as other classes that use the word (e.g. StudentController), properties and methods (e.g. StudentId, RegisterStudent) and even other languages like js or ts if you have a frontend RegisterStudentForm component.
For this reason I recommend Contextive just for documenting domain concepts, not implementation details. JavaDoc should be for how to use the class.
Nice presentation. By the way, shouldn't concurrency be handled by use of a proper SQL transaction isolation level (I think Repeatable Read is what you need in that case)? And then if the transaction commit fails because of the isolation level (meaning that some of the values you've read have been modified concurrently), you just execute your controller method (or a method in the enrollment service if all of the logic is there) once again from scratch.
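For example, roughly something like this (just a sketch - the service and method names are invented, it only shows the "detect the conflict and retry in a new transaction" idea; in real code you'd probably configure a dedicated transaction template):

import org.springframework.dao.ConcurrencyFailureException;
import org.springframework.transaction.TransactionDefinition;
import org.springframework.transaction.support.TransactionTemplate;

// Hypothetical wrapper - RetryingEnrolmentHandler/EnrolmentService/enrolStudent are invented names.
public class RetryingEnrolmentHandler {

    private final TransactionTemplate tx;
    private final EnrolmentService enrolmentService;

    public RetryingEnrolmentHandler(TransactionTemplate tx, EnrolmentService enrolmentService) {
        tx.setIsolationLevel(TransactionDefinition.ISOLATION_REPEATABLE_READ);
        this.tx = tx;
        this.enrolmentService = enrolmentService;
    }

    public void enrol(String studentId, String courseId) {
        int attempts = 0;
        while (true) {
            try {
                // Each attempt runs in its own transaction, so the business rule
                // (e.g. course capacity) is re-checked against fresh data.
                tx.executeWithoutResult(status -> enrolmentService.enrolStudent(studentId, courseId));
                return;
            } catch (ConcurrencyFailureException e) {
                if (++attempts >= 3) throw e; // give up and surface the failure to the caller
            }
        }
    }

    // Invented interface, standing in for wherever the enrolment logic lives.
    interface EnrolmentService {
        void enrolStudent(String studentId, String courseId);
    }
}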
There are a number of technical mechanisms for "solving" the concurrency issue - transaction isolation level is potentially one, as are a variety of pessimistic and optimistic concurrency controls. They each have their tradeoffs, balancing throughput and wait times. If one of those mechanisms was used then, as you point out, whichever way the concurrent activity is detected, the operation could be retried in a new transaction, which would protect the business rule - in this case, by causing one of the students to be told their enrolment was no longer possible.
The point of that part of the talk wasn't really the particular mechanism but more to highlight that a deeper understanding of the domain can remove the concurrency risk altogether - and lead to a better outcome, as students are less likely to experience their enrolments failing and the education provider can service more students overall.
@@ChrisSimonAu Sure thing :) I just wanted to share that locking / versioning doesn't have to be implemented by a programmer (as this implementation will then have to be maintained), instead - features of the components that are already part of the system (database) could be used to achieve the desired behavior.
@@IlyaDenisov Yes, it is possible to use the database as your concurrency control mechanism, and that is typically fine for small projects and prototypes, but if your application scales, you run into problems.
For instance, you start having some long running transactions and because you have put everything inside "@Transactional", you soon enough run into lock wait timeouts and deadlocks.
Also, think about what happens if at some point, you want to introduce some caching into your application, so that not every single request hits the database. Now that task has become significantly more difficult because you were relying on your reads and updates being in the same transaction boundary for your application's correctness.
I am not saying that your approach is always incorrect, but keep in mind that every design decision has consequences.
@@alexsmart2612 Good point. Yet I think this aspect (concurrency handling for the feature) should evolve incrementally in the same way as other aspects of the app, so that the initial implementation, suitable for the described logic and project state, could be as simple as transaction isolation levels. That kind of resonates with the idea behind the TDD approach highlighted by Chris in the video - an engineer will benefit from maintaining a balance by taking only a small reasonable step at a time, instead of trying to build THE ULTIMATE SYSTEM from the start :)
@@IlyaDenisov Once you have a million-LOC project with infrastructure and business logic concerns spread all over the place, it is an expensive hole to crawl yourself out of.
While I completely agree with the general principle of "maintaining a balance", for me optimistic locking, keeping small transaction boundaries etc. fall well within that reasonable balance. They are not all that hard to implement and maintain even for junior developers (after a brief training period). That the resulting code is typically much simpler and easier to reason about is an added bonus.
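As a sketch of what I mean by optimistic locking (not the demo's actual entity - the names and fields are invented; jakarta.persistence on recent stacks, javax.persistence on older ones), with JPA it's mostly just a version column:

import jakarta.persistence.Entity;
import jakarta.persistence.Id;
import jakarta.persistence.Version;

// Sketch only. Two concurrent updates to the same row make the losing commit throw an
// optimistic locking exception, which the caller can translate into a retry or an error.
@Entity
public class Course {

    @Id
    private String id;

    private int enrolledCount;
    private int capacity;

    @Version               // JPA bumps this on every update and checks it at commit time
    private long version;

    protected Course() {}  // no-arg constructor required by JPA

    public void recordEnrolment() {
        if (enrolledCount >= capacity) {
            throw new IllegalStateException("Course is full");
        }
        enrolledCount++;
    }
}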
Is he using the Spring framework?
26:22 Shouldn't the calls to get the room and include the course in the catalog be in a CourseService class, to better implement the Ports and Adapters architecture? That way you're only passing the room ID string and course name to the CourseService class. Then the Controller, which is the Adapter, can easily be replaced with something else, for example an Adapter class which gets a message from a queue. This new Adapter would then just call the CourseService class, passing it the room ID and course name. The CourseService class would throw an exception if there's a problem and the REST controller would convert that into an HTTP 400 Bad Request return code. Also, the response should ideally have a body with the error message.
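Roughly what I have in mind (all names invented, not from the demo repo):

import java.util.Optional;

// The controller (or a queue listener) becomes a thin adapter that just calls this service.
class CourseService {
    private final RoomRepository roomRepository;
    private final CourseRepository courseRepository;

    CourseService(RoomRepository roomRepository, CourseRepository courseRepository) {
        this.roomRepository = roomRepository;
        this.courseRepository = courseRepository;
    }

    Course includeInCatalog(String courseName, String roomId) {
        Room room = roomRepository.findById(roomId)
                .orElseThrow(() -> new RoomNotFoundException(roomId));
        return courseRepository.save(Course.includeInCatalog(courseName, room));
    }
}

// Minimal stand-ins so the sketch hangs together; the real types live in the domain/persistence layers.
class Room {}
class Course {
    static Course includeInCatalog(String name, Room room) { return new Course(); }
}
interface RoomRepository { Optional<Room> findById(String id); }
interface CourseRepository { Course save(Course course); }
class RoomNotFoundException extends RuntimeException {
    RoomNotFoundException(String roomId) { super("No room with id " + roomId); }
}

The REST adapter would then catch RoomNotFoundException and turn it into an HTTP 400 with an error body.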
Can you send me the link to the project? Thanks
I can't paste a link but check the qr code at 51:54
I liked the demo but I can't help but feel that it's too much on the happy path which does not help show how TDD helps you course correct immediately when something doesn't feel right.
Thanks for the comment - I agree - when I practice TDD in real life, it's much more of a winding route than this demo illustrates, with often more refactoring steps and sometimes backwards undos. However, with only 50 mins, my goal with this talk is to demonstrate 3 key things:
1. the value of the 'many more much smaller steps' approach to TDD
2. how approaches to domain modelling (such as event storming) can help with more valuable and simpler designs
3. how TDD as a general philosophy supports even large scale design changes
To that end, each third of the talk is roughly about each point in sequence. And even then it feels like a very 'full' talk!
But yes, I 100% agree with you and would love to do more demos that illustrate more 'real world TDD' :)
Before DDD there must have been only assembler code. At least that conclusion can be drawn from DDD fans. Talking with business stakeholders or making a domain model is not DDD. We used to do it before, you know.
Besides that, good talk. I would only argue that it's more of an integration test than a unit test, which is fine for small apps. For a bigger system I would separate module tests from integration tests.
IntelliJ IDEA really screams at me when I pass Optional as a method argument.
Yes, the Java team recommends that Optional only be used for return types - there, it makes it explicit to the caller that they need to handle an empty case. For method arguments, they discourage it for a bunch of pragmatic reasons - if you google "java optional as method argument" there are some great Stack Overflow discussions on the topic with arguments for and against.
Personally, having familiarity with more functional languages, I'm very comfortable using a Maybe/Option type as an argument - but I try to follow the Java recommendations/idiomatic approach in this demo.
I actually don't recall using it as a method argument in this demo though - can you remind me where I may have done that?
@@ChrisSimonAu 25:03
@@IvanRandomDude Great catch - thanks so much! I think the Java idiomatic thing to do here would be to not pass an Optional in, and instead just pass in a Room and have the controller look like:
Course newCourse = roomRepository.findById(courseRequest.getRoomId())
.map(r -> Course.includeInCatalog(courseRequest.getName(), r))
.orElseThrow(() -> new ResponseStatusException(HttpStatus.BAD_REQUEST));
Keeping the functional/monadic style separate from the entity method.
I'll update my repo for future demonstrations :)
I like the demo, but I have some questions.
Why doesn't Enrolment appear in your domain?
And why does the Enrolment class have studentId and courseId rather than an instance of Student and an instance of Course?
Thanks!!
I'm not sure about the first question - there is an Enrolment entity, but it is in the Enroling vertical slice/feature folder.
On the second question, I didn't have time to talk about the tradeoffs of those two approaches, but it's a good question.
To start with, I find starting with IDs simpler as you can build up some of the functionality in this incremental style, even before the other entity exists. Later, when it does exist you could refactor to use a reference object instead of an ID - if it makes sense.
As to why it might or might not make sense - this is a big topic that relates to transactional consistency boundaries (aka aggregates), lazy vs eager loading and read vs write models. It's hard to summarise in a YouTube comment, but if you google "ddd reference object vs id" you will find much discussion about it! In particular, google "Vaughn Vernon aggregate design rule 3" which should turn up some papers that take an in-depth look at this pattern.
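To make the two shapes concrete (purely illustrative - constructors and behaviour omitted, not the demo code):

// Option A (what the demo does): reference other aggregates by ID.
class Enrolment {
    String studentId;
    String courseId;
}

// Option B: hold object references, which pulls Student and Course into Enrolment's
// loading and consistency boundary (lazy vs eager loading, larger transactions).
class EnrolmentWithReferences {
    Student student;
    Course course;
}

// Placeholder aggregates, just so the sketch is complete.
class Student {}
class Course {}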
If you apply DDD “by the book” then one aggregate root is not allowed to keep a strong reference to another aggregate root. However, it is allowed to store the unique identifier of that other aggregate root (a weak reference). In Chris's demo, Student, Course and Enrolment are all separate aggregate roots, I believe.
It is also said that in a single transaction you are not allowed to update multiple aggregate roots. If one stored a strong reference to another aggregate root, it would be possible to update multiple aggregate roots, which violates the principle.
If one aggregate is interested in changes that occur in another aggregate, you should implement domain events. The other aggregate root should have a domain event listener that reacts to those events in its own transaction. I hope that makes sense :)
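In Spring terms, the domain-events approach could be sketched roughly like this (the names are made up, and both classes would need to be Spring-managed beans):

import org.springframework.context.ApplicationEventPublisher;
import org.springframework.transaction.event.TransactionPhase;
import org.springframework.transaction.event.TransactionalEventListener;

// One aggregate's transaction publishes a fact about what happened...
record StudentEnrolled(String studentId, String courseId) {}

class EnrolmentService {
    private final ApplicationEventPublisher events;

    EnrolmentService(ApplicationEventPublisher events) {
        this.events = events;
    }

    void enrol(String studentId, String courseId) {
        // ...update only the Enrolment aggregate inside this transaction...
        events.publishEvent(new StudentEnrolled(studentId, courseId));
    }
}

// ...and the other aggregate reacts after that transaction commits, in its own unit of work.
class CoursePlacesHandler {
    @TransactionalEventListener(phase = TransactionPhase.AFTER_COMMIT)
    public void on(StudentEnrolled event) {
        // e.g. start a new transaction here and update the Course aggregate's available places
    }
}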
"Why Enrolment has studentId and courseId instead of an instance of those classes"
Because if you implement it with instances, the changes you apply are no longer atomic and concurrent updates become hard.
Imagine someone changes something about the course (e.g. a professor updating the curriculum) while at the same moment a student updates their personal info.
When both of them try to save, it will fail! Even though their updates have nothing to do with each other.
Keeping them separate and related only by ID allows for smaller changes that are easier to reason about.
Which is another pro of this solution: the example is simple right now, but the software will get more and more complex and gain more business cases.
Having them together would make it harder to reason about each of them, and we have limited brain capacity.
Great talk!
Thanks!
This is ATDD, acceptance tests
TDD is amazing! It only takes 3x as long to get out some working code! It gets 3x as expensive, stakeholders start complaining, but let's do full TDD anyway!
That has not been my experience but if it has been yours then of course, by all means don't do TDD. However, I'm curious what you mean by "Full TDD". Does the style of TDD I show in this talk count as "Full TDD" to you?
@@ChrisSimonAu No, full TDD also accounts for the complex business cases the business team comes up with. If you're going to pre-write a test for every single use case, then get all of those done, then have changing business requirements, you'll spend your entire existence writing and refactoring tests, and good luck hitting deadlines.
@@CheeseStickzZ ok, if "Full TDD" is where you pre-write all the tests (for what it's worth, I don't know anybody who does that...), what do you call the style of TDD in this talk where you don't pre-write any tests?
@@ChrisSimonAu Not sure - I've never written tests before the implementation. I'm just mentioning that even if you do, there are still complex use cases/scenarios to test for afterwards, and in an environment where business requirements change often it gets very cumbersome to rewrite both implementation and tests over and over again. Small dev team, tight deadlines.
@@CheeseStickzZ Yes, dealing with an environment with changing business requirements is always a challenge - personally, TDD helps me with that because ideally you only change the tests that represent requirements that actually changed. This helps ensure you haven't broken all the other functionality. If you have to change tests just because you're changing the design, then I encourage you to look for ways of writing tests that keep them less coupled to implementation. This talk has an example of this in the last 20 mins, where the requirements change and you can see that only a small number of tests need adjusting.
The style in this talk involves writing one very small test, then one small piece of functionality. I've found this helps me ensure the tests are not coupled too tightly to implementation, and that I don't spend too much time writing tests before delivering value.
Hope it helps see different ways of approaching the goal...
I tried TDD the way it's done in the video (a lot less sophisticated of course :P) but my boss told me that it was wrong because I was writing integration tests 🙁.
You can see at 46:30 where I switch to a 'class-based' test around a more complex piece of domain logic that warrants it.
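Just for shape (illustrative only - this isn't the actual test or domain class from the repo, and the factory method is invented), a 'class-based' test looks something like:

import static org.junit.jupiter.api.Assertions.assertThrows;
import org.junit.jupiter.api.Test;

// Exercises the domain class directly - no HTTP layer, no database.
class CourseTest {

    @Test
    void cannotEnrolBeyondCapacity() {
        Course course = Course.withCapacity(1);   // hypothetical factory, for illustration

        course.recordEnrolment();

        assertThrows(IllegalStateException.class, course::recordEnrolment);
    }
}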
Great ✨
Thanks!
It's sad that he started from the UI