Focus on the right parts of the problem when you are creating a new microservices system. Here's my FREE 'how-to-guide-book' offering advice on the Microservices basics to help you get started ➡www.subscribepage.com/microservices-guide
I've been using "chained unit tests" with some success in this area. As always: The approach doesn't come with a brain, bring your own 🙂
The general idea is that you want to test a sequence of operations A->...->X. You can set up all the steps, OR you can look inside the black box and see that what happens is actually A->B->...->X. The next step is to record several requests and responses that A and B exchange. For example, when B is using A to do some work, then A has a public API and there are unit tests for A. Now the trick is to publish those inputs and outputs as code, ideally as a build artifact. The unit tests of B can then add this as a dependency and run their own tests to make sure B can create the requests that A understands (= inputs for the tests in A) and can process the responses of A (= outputs of the tests in A). Since your software should be deterministic, this allows you to run the tests of A and B independently and still know for sure that connecting B to A will work.
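To make that concrete, here is a rough sketch of B's side in Java/JUnit, assuming A publishes its recorded test inputs/outputs under a contract/ directory packaged as an artifact B can depend on. All names here (PriceClient, PriceResult, the file layout) are invented for illustration:

```java
import java.nio.file.Files;
import java.nio.file.Path;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.*;

class ChainedContractTest {

    // A's build published its test inputs/outputs under this directory
    // (e.g. packaged as a hypothetical a-service-test-io.jar that B depends on):
    static final Path RECORDED = Path.of("contract", "a-service");

    @Test
    void bBuildsARequestThatAsTestsAlreadyAccept() throws Exception {
        // An input from A's unit tests:
        String recordedRequest = Files.readString(RECORDED.resolve("get-price.request.json"));
        // B's client must produce the same shape:
        assertEquals(recordedRequest, PriceClient.buildGetPriceRequest("SKU-42"));
    }

    @Test
    void bParsesAResponseThatAsTestsActuallyProduced() throws Exception {
        // An output from A's unit tests:
        String recordedResponse = Files.readString(RECORDED.resolve("get-price.response.json"));
        // Parsing must not throw and must expose the fields B depends on:
        assertNotNull(PriceClient.parse(recordedResponse).amount());
    }
}

// Stand-ins for B's real client code:
record PriceResult(String amount) {}

class PriceClient {
    static String buildGetPriceRequest(String sku) {
        return "{\"sku\":\"" + sku + "\"}";   // B's real request serializer goes here
    }
    static PriceResult parse(String json) {
        return new PriceResult("9.99");       // B's real response parser goes here
    }
}
```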
When someone changes A, the pipeline will run B's tests with the new version of the "A's-tests" dependency. That will give them an early warning when they make breaking changes. They can then either fix it on their side (make the change non-breaking) or reach out to B to fix it (just like your approach).
The nice thing here is that this can be used to break up more complex microservices and monoliths into smaller parts that can then be tested by using the "expected output" of the tests for step N as the inputs for step N+1. This gives you great flexibility in where in the chain of steps you want to cut, how often you want to cut, and when you want to stop. If everything is stable, do less testing. When bugs start to show up, test more.
The next nice thing is that the saved test data can usually be used to create more test cases: Take the default and change one field to get a new corner case. This makes it easy to come up with lots of tests, and it's easy to see how each test deviates from the norm. This will allow you to quickly cover fickle components with lots of tests while you can just do the bare minimum for reliable parts of your system.
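A rough sketch of that mutation idea, again with invented names (OrderValidator, the payload shape) and Jackson for the JSON handling:

```java
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.node.ObjectNode;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.*;

class OrderCornerCaseTest {
    static final ObjectMapper MAPPER = new ObjectMapper();
    // The recorded "known good" payload, saved from an earlier test run:
    static final String DEFAULT_ORDER =
        "{\"sku\":\"SKU-42\",\"quantity\":1,\"currency\":\"EUR\"}";

    // Each corner case is the default with exactly one field changed,
    // so it is obvious how the test deviates from the norm.
    private ObjectNode defaultOrder() throws Exception {
        return (ObjectNode) MAPPER.readTree(DEFAULT_ORDER);
    }

    @Test
    void zeroQuantityIsRejected() throws Exception {
        ObjectNode order = defaultOrder();
        order.put("quantity", 0);               // the single deviation
        assertFalse(OrderValidator.isValid(order));
    }

    @Test
    void lowercaseCurrencyIsRejected() throws Exception {
        ObjectNode order = defaultOrder();
        order.put("currency", "eur");           // the single deviation
        assertFalse(OrderValidator.isValid(order));
    }
}

// Stand-in for the component under test:
class OrderValidator {
    static boolean isValid(JsonNode order) {
        return order.path("quantity").asInt() > 0
            && order.path("currency").asText().matches("[A-Z]{3}");
    }
}
```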
I've found Pact to be useful for services owned by a single team or closely related teams, but it becomes difficult if the teams are not working very closely together because Pact will break the provider's build as soon as the consumer asks for a change. Pact even suggested avoiding that use case. Awesome find on Specmatic. I've not come across that one yet. Adding it to the toolkit.
You can use Pact's bi-directional approach instead of a consumer-driven approach.
You verify the consumer contract against a specification of the provider's API - e.g. Twitter's OpenAPI spec.
The provider doesn't even know you are verifying your integration.
The downside is that the provider can't adapt for the consumer. But the scope is exactly for providers that don't care.
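As a toy illustration of the idea (not Pact's actual implementation), you could check the operations your consumer uses against the provider's published OpenAPI document. File names and the Expectation record here are made up:

```java
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import java.nio.file.Path;
import java.util.List;

public class BiDirectionalCheck {
    record Expectation(String method, String path) {}

    public static void main(String[] args) throws Exception {
        // The provider's published spec, fetched during our build:
        JsonNode openApi = new ObjectMapper()
            .readTree(Path.of("provider-openapi.json").toFile());

        // What our consumer actually calls (normally harvested from its tests):
        List<Expectation> used = List.of(
            new Expectation("get", "/tweets/{id}"),
            new Expectation("post", "/tweets"));

        for (Expectation e : used) {
            JsonNode op = openApi.path("paths").path(e.path()).path(e.method());
            if (op.isMissingNode())
                throw new AssertionError(
                    "Provider no longer offers " + e.method() + " " + e.path());
        }
        System.out.println("All consumer expectations are still present in the provider spec.");
    }
}
```

Real bi-directional tools also compare request/response schemas, not just that the operations exist; this only shows the shape of the check.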
If you enable the Pending Pacts feature on the provider, it won't fail when new consumer contracts get added.
Once the provider implements the new contract and the tests pass, the pact is no longer pending, so regressions will fail the provider build from then on.
We have in-house tooling that does these things. Our services publish their Swagger/OpenAPI definitions and can import those of other services to generate client code. However, the team that built the tools is no longer here, and after several years it has become very difficult to modify central pieces. So it's useful to get some recommendations for third-party tools that we wouldn't have to maintain.
We faced the same issue in my current project. I thought a lot about the problem and came up with an idea, but I do not have enough influence to push it through.
I believe we should treat microservices as we treat modules or classes in our code. When we draw this analogy and have some experience with the SOLID principles, the solution comes almost naturally: we have to apply the dependency inversion principle. DIP says to depend on high-level abstractions instead of low-level details. It is easy in the case of classes, just put an interface between the two classes and you are done, but in the case of a whole service it is a bit trickier.
In my opinion a microservice should deliver two artifacts: the service itself, and a client library. The client library will be your interface above the HTTP API. So instead of writing a client which uses HTTP calls to access the other service, you use method calls on the library, which makes the HTTP calls in the background. The main difference comes from who develops and tests those calls: the microservice's own team is best placed to do this, since they have the knowledge and the code. With a client library like this you can change the HTTP API of the service, and your users do not have to change anything. It is more work and requires a good deprecation strategy, but I believe this approach can eliminate many problems.
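A minimal sketch of the two-artifact idea, assuming Java and the JDK's built-in HttpClient; the names and endpoint are invented:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// The high-level abstraction consumers program against (the DIP "interface"):
public interface PriceService {
    String priceFor(String sku) throws Exception;
}

// Shipped in the same client-library artifact: the provider-maintained HTTP
// implementation. If the provider changes its HTTP API, it updates this class
// and releases a new library version; consumers recompile, nothing else changes.
class HttpPriceService implements PriceService {
    private final HttpClient http = HttpClient.newHttpClient();
    private final URI baseUri;

    HttpPriceService(URI baseUri) { this.baseUri = baseUri; }

    @Override
    public String priceFor(String sku) throws Exception {
        HttpRequest request = HttpRequest
            .newBuilder(baseUri.resolve("/prices/" + sku))
            .GET()
            .build();
        HttpResponse<String> response =
            http.send(request, HttpResponse.BodyHandlers.ofString());
        if (response.statusCode() != 200)
            throw new IllegalStateException("Price lookup failed: " + response.statusCode());
        return response.body();
    }
}
```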
Once again this channel coincides with my day. We were talking about Pact earlier today because I wasn't sure what the difference between a component test and a contract test was. You can just as easily contract test your microservices with JUnit and MockWebServer.
Not JUST as easily. Having something that can auto-test based on the OpenAPI spec is pretty useful. I've used other similar tools to avoid hand-coding contract tests. You're right that you don't need a tool like this to do contract tests, it's just a toil reducer.
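For comparison, this is roughly what one of those hand-coded consumer-side contract tests looks like with JUnit 5 and OkHttp's MockWebServer; the client call and JSON shape are invented for illustration:

```java
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import okhttp3.mockwebserver.MockResponse;
import okhttp3.mockwebserver.MockWebServer;
import okhttp3.mockwebserver.RecordedRequest;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.*;

class PriceClientContractTest {

    @Test
    void sendsTheRequestTheProviderExpects() throws Exception {
        MockWebServer server = new MockWebServer();
        // Canned provider response, written by hand from the agreed contract:
        server.enqueue(new MockResponse()
            .setHeader("Content-Type", "application/json")
            .setBody("{\"sku\":\"SKU-42\",\"amount\":\"9.99\"}"));
        server.start();
        try {
            // Exercise the real client code against the fake provider:
            HttpRequest request = HttpRequest
                .newBuilder(server.url("/prices/SKU-42").uri())
                .GET()
                .build();
            HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

            // Assert the consumer's side of the contract by hand:
            RecordedRequest recorded = server.takeRequest();
            assertEquals("GET", recorded.getMethod());
            assertEquals("/prices/SKU-42", recorded.getPath());
            assertTrue(response.body().contains("\"amount\""));
        } finally {
            server.shutdown();
        }
    }
}
```

This is exactly the toil a spec-driven tool generates for you.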
I am working on a monolith project now with E2E tests. I am having an issue where the tests fail because the mail SaaS product's API has some kind of rate limiting. So now the tests fail even though there is nothing wrong with our code.
Can contract testing solve this? Sounds promising...
Great video - cheers! I built something similar years ago for a particular project and used OpenAPI, though it was very specific to that project. Looking into Specmatic now :)
Is it useful to use contract testing between frontend and backend apps when both are developed by a single team? Theoretically both components are tested in e2e acceptance tests, but with contract tests you can cover more edge cases with less work than writing traditional unit/integration tests.
Saw Specmatic last year, haven't got a chance to try it yet. The mocks are quite intriguing, as they seem to unblock the FE before the BE has actually implemented an endpoint.
I haven't heard a singular individual use the phrase "interface definition language" since the early 00s. There's a throwback to CORBA.
...nor me, but it is quite descriptive 😉
I found Pact to be hard to work with and the documentation verbose and unhelpful. Does anyone have links to articles explaining its implementation? It also still doesn't seem to be widely adopted (?). Are there other tools or approaches now?
Well, the video where you asked this question about alternatives to Pact is about an alternative to Pact. I guess you didn't watch this one? 😉🤣😎
I'm working on a big banking project which uses the distributed monolith approach. While we do not do contract testing, we do have a shared repository which is the source of truth for API specifications and which automatically updates the API definitions in all related projects. E.g. if I change the API for service A, and service B is calling A, then both A and B get updated versions of the API.
On the one hand, it is nice to have a single source of truth and have all clients updated. But the development flow is not nice, as you need to make changes in multiple repositories. And you cannot change the API catalog repo without first actually implementing the changes in the provider, as the catalog CI does not handle verification of the API definitions.
I would love to see this explored on your channel.
Simple, versioning.
And activate the version only when all the parts are up to date
@@RaMz00z Ties every change to the slowest change. :(
It's reassuring to see someone promote a project while maintaining critical thinking and voicing reservations. It seems more useful and honest.
What about messages and events?
- Don't microservices listen to domain events from other microservices? Or am I wrong?
Don't know if Specmatic supports it, but AsyncAPI could be used.
@@Rope257 The homepage says they support AsyncAPI (and even WSDL) along with OpenAPI - not finding any documentation on AsyncAPI support yet, though.
Yes, and those events are the contracts that this form of testing can validate.
@@JeffryGonzalezHt Oh cool! I didn't have time to look it up when I typed the suggestion but good to know they support that spec :D
Nice video! Have you tried Specmatic now?
I'm choosing between this and self-hosted Pact for our CI/CD pipelines.
I'd be interested if you took any interest in Microcks. It seems to approach the problem in the same way as Specmatic.
One of those contract-based approaches sounds like speculative execution in microprocessors. I wonder if the metaphor of pipelines (which exists in both) is playing a role here.
Yes it is. Speculative execution is also at the heart of the deployment pipeline in CD. These things crop up all over the place, because they are about "information" rather than just technology.
In CD we divide pipelines into different phases, the fast "Commit Phase" reports very quickly to support development, and devs move on to new things when it passes. The next phase, "Acceptance" evaluates for releasability and takes longer. The bet is that if everything is good after Commit, most likely all the tests in "Acceptance" will pass - so we are speculatively executing on new features, in parallel with "Acceptance" being confirmed, on the assumption that it will pass. 😉
Similarly, in my example, Team B is speculatively executing on a new version of the contract.
What about microfrontends?
- How can we do the same for sharing "state"?
- How can we do the same for sharing events?
- How can we do the same for sharing component injection points?
Not sure what you are asking; this is contract testing, so it can validate the contract between your "micro-front-ends" and other services?
@@ContinuousDelivery Hi, thanks a lot for your time. Because you were talking about endpoints, which are a clear way for microservices to communicate, I was wondering how this might work with other kinds of artifacts. For example, microfrontends are also each developed by a different team, but instead of endpoints they communicate through other mechanisms. So I was wondering if there is something like this for them.
What you said about types kept me interested, because through things like TypeScript a few things can be done, but sadly it does not cover the intricacies of passing a callback and when the callback will be executed. But I guess I will dig into it further.
Thanks!!
@@DavidRodenas I think that these are still a useful tool in that context; the trick is to design the interactions and represent them as well-defined 'contracts' for exchanging information. It is when these interactions are treated as ad-hoc that this form of testing doesn't work so well, but the reason for that is that the SW design isn't good enough. Abstracting the conversation is always a good idea: treat that abstraction as a contract and away you go!
This type of tool and interservice communication usually doesn't cause disagreement at work about how teams should work together. What does cause problems is when a UI is involved; then people talk about having separate UX teams, developers who don't understand the business and never talk to actual users, etc. Has there been a UI-specific video on this channel?
As a frontend developer, I've not seen any. I generally find that the concepts translate pretty well, but unfortunately that can make it difficult to share sometimes, since I feel like if I say "Hey, there's this great video, but substitute concept A for concept B", no one will watch it.
Strangely, some FE developers seem to think there's a difference between how we should treat UIs and services. It baffles me. The only difference is the API. Services provide a machine-readable API. UIs provide a wetware-readable API. Otherwise, we should treat them the same.
Agreed. A micro frontend IS a micro service, full stop. Unfortunately, testing a frontend requires interacting with a UI in a machine-readable way, which I would argue includes checking styling as well. Unfortunately, our automated testing team (I know, I know...) is still selecting things with IDs and XPath... I think I've convinced them to look at alternatives, though.
@@peterolson7351 Send them some Kent C. Dodds posts. :D
Customers would see it as a single unit, hence it should be treated as a single unit when the QA team does functional testing, even though it might be composed of multiple services.
Only if you wish to invest inordinate amounts of money developing a test you cannot trust. It doesn't matter how the customer sees it. You don't assemble a car before testing the components, and you don't test an assembled car in a way that will tell you if every component is working correctly. QA teams are a quality anti-pattern. See "Accelerate" by Forsgren, Humble, and Kim.
Maybe I misunderstand, but is it really enough to just verify that the APIs are compatible? It sounds like "if it compiles then it must work".
Don't you still need to write gherkin-style tests based on commands and expected events to really verify that the right things happen? Do you have to encode all the flows and expected events in the OpenAPI spec?
Those should be implemented by the provider with whatever their xUnit framework is as separate behavior tests.
You're allowed to write gherkin *within* either one of the services. If you're trying to achieve independent deployments, you're not allowed to write gherkin using both services without mocking one. If you did, then you would no longer have independent deployments
This isn't about testing the behavior of an implementation of the contract specification.
It's about testing if different versions of the specification are compatible with one another.
If they are not, you will break a dependent service. If they are, you won't.
Unless you also change the underlying behavior as well as the contract specification.
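A toy version of such a version-compatibility check, assuming JSON OpenAPI documents: every operation the old spec offered must still exist in the new one. Real diff tools also compare parameters, schemas, and response codes; the file names here are placeholders:

```java
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import java.nio.file.Path;
import java.util.Iterator;

public class SpecCompatibility {
    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();
        JsonNode oldSpec = mapper.readTree(Path.of("openapi-v1.json").toFile());
        JsonNode newSpec = mapper.readTree(Path.of("openapi-v2.json").toFile());

        // Walk every path+method in the old version of the contract:
        Iterator<String> paths = oldSpec.path("paths").fieldNames();
        while (paths.hasNext()) {
            String path = paths.next();
            Iterator<String> methods = oldSpec.path("paths").path(path).fieldNames();
            while (methods.hasNext()) {
                String method = methods.next();
                if (newSpec.path("paths").path(path).path(method).isMissingNode())
                    throw new AssertionError(
                        "Breaking change: " + method.toUpperCase() + " " + path + " was removed");
            }
        }
        System.out.println("New spec still offers everything the old one did.");
    }
}
```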
Thanks for the clarifications.
This approach doesn't truly verify the actual compatibility between services. It may detect that a change is not backward compatible on paper, but that doesn't necessarily mean it is a breaking change for anyone: an API can consist of multiple fields, and consumers may only utilize a portion of them. Tools like Pact or Spring Cloud Contract (CDC) help identify this, allowing you to safely remove or rename some fields in your API. You can easily see which consumers are interested in which data in your API and make better decisions about changes.
In addition, Hyrum's law says that "given a sufficient number of users, it does not matter what you promised in a contract: all aspects of behaviour will be depended on somewhere by someone".
Blind contract testing does not help test the behaviour of the system or its data.
Let's say system A depends on system B, but system B validates system A's data; let's say a field has to be n characters long. This is normally enforced by a validation layer, and if this layer changes you break the contract and system A will break because of it. You could make this part of the contract, but most tools like OpenAPI do not make that easy to do.
I find it best if some aspect of data (given some predetermined scenario) is processed by the contract testing, so that we can be sure that dependent systems are able to cope with (at least) the data that is provided.
In our scenario, system B can run "real" requests (provided by some determined scenario) through some part of its code that invokes validation, and that helps prevent this kind of problem.
This is just an example, but automated contract testing, in the way the tool in the video seems to work, doesn't seem to handle this well. At least with something like Pact, or even just a unit test that ingests some data from another system's test outputs, I have to write a physical test and can run the data against the important code modules in my dependent systems.
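A sketch of that mitigation: replay data recorded from system A through the part of system B that performs validation, so behavioural rules like field lengths get checked, not just the schema. The validation rule and fixture path are invented:

```java
import org.junit.jupiter.api.Test;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import static org.junit.jupiter.api.Assertions.*;

class RecordedDataValidationTest {

    // System B's real validation rule, invoked directly
    // (in real code this would call into B's validation layer):
    static boolean isValidToken(String token) {
        return token != null && token.length() >= 7;
    }

    @Test
    void everythingSystemAActuallySendsPassesBsValidation() throws Exception {
        // Tokens recorded from system A's test outputs, shipped as a fixture:
        List<String> recordedTokens =
            Files.readAllLines(Path.of("fixtures/a-generated-tokens.txt"));

        for (String token : recordedTokens)
            assertTrue(isValidToken(token),
                "A generated a token B would reject: " + token);
    }
}
```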
@@mattcorby This isn't about testing the "behaviour of the system"; that is the job of other tests. Contract testing is there to de-couple development. If the contract is met, then we don't need to test the behaviour of the entire system; we can test the behaviour of the pieces. This doesn't necessarily mean the whole system does useful things, but it means that all of the pieces of the system can talk, and each of them can be verified to do something useful in its context, via acceptance testing.
It falsifies the contract between services - and that's the best that you can do. There is no way to "prove" that services work together in all circumstances. Falsification is the stronger statement. I can assert that "All swans are white" and I can never prove that, but I can falsify it the first time I see a black swan.
I can assert that my code is perfect, but however many tests I have, I can NEVER prove it, but one failing test, and I know that my code is not good enough.
Contract tests assert where there are failures!
@@ContinuousDelivery Hey, thanks for your reply. Very much appreciated!
I don't disagree with what you said about the behaviour at all, or falsification. Perhaps my opening statement was too confusing and some of my words were unclear. I kinda rattled out some thoughts after watching the video without analysing too much.
Most of what I was talking about was not the actual behaviour of the component systems, more that _data_ generated by those systems is also part of that contract: the size of fields, the implicit validity of the data I'm sending, etc.
If system A is sending username and token data, and system B has a validation that says "tokens less than length 7 are invalid", system A will not work with system B if it starts generating tokens that are always 6 digits long. An OpenAPI schema alone doesn't test for this.
A fake generated by system B that A tests against might be a way to ensure compatibility, as long as it includes the validation step, but imho this sort of test should really be part of the contract. And then you're locked into using updates of those fakes, just to remain in lock step with its behaviour.
Maybe I missed something about the framework in the video. Perhaps it does provide a mechanism for this? I'm willing to be corrected :)
Specmatic sounds exactly like Azure SchemaRegistry
EDIT: Not exactly. Only the schema registry and versioning aspect
Not as far as my reading of "Azure Schema Registry" goes; it doesn't mention testing, which is the whole point of Specmatic. The registry is just part of the approach - or am I missing something in "Schema Registry"?
@@ContinuousDelivery Admittedly, I oversimplified Specmatic. I should clarify that while Schema Registry's storage, retrieval, and versioning can be used to create similar in-house tools, Specmatic seems to offer a more advanced and intriguing application of these concepts.
So I started watching this. And I had to pause and order this exact t-shirt.
So my complaint is that Dave has so many cool t-shirts that it's getting in the way of my learning.
Oh well I did manage to resume. So all is good.
I am sorry for interrupting your viewing 🤣🤣
Are you telling me that you do not follow Kent Beck or Martin Fowler on Twitter? :c
Dave sounds a little different. Was he sick when he recorded this?
A little, but feeling better now, thanks.
Like for a gorgeous t-shirt!! :)
If you had an off-the-shelf wireless game controller for your R2-D40, you could easily submerge down to the Titanic without much testing, while making lots of money immediately.
This is too hard.
As someone currently working on a team with many of these issues, it sounds too hard NOT to do it.