ATTENTION! If you are running a database that runs on demand for cost optimization, keep in mind that if you use this method to check its health, the database will be pinged every few seconds and will stay up all the time. Been there... :)
Comments like these are pretty valuable. Thanks for sharing!
So, what's the recommended alternative in this case? 🤔
It’s mentioned in the video. Rate limiting or auth to ensure that you’re the only one calling those endpoints in acceptable intervals
Good point, it's easy to put it in strict intervals as Nick mentioned
@@amnesia3490 how can I define these strict intervals? Thanks!
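One way to enforce strict intervals is the built-in rate limiter (.NET 7+), applied just to the health endpoint. A minimal sketch, assuming minimal hosting; the policy name, limit, and window are illustrative assumptions:

```csharp
using System.Threading.RateLimiting;
using Microsoft.AspNetCore.RateLimiting;

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddHealthChecks();

// Fixed-window policy: at most 2 health probes per 30-second window
builder.Services.AddRateLimiter(options =>
    options.AddFixedWindowLimiter("health", o =>
    {
        o.PermitLimit = 2;
        o.Window = TimeSpan.FromSeconds(30);
    }));

var app = builder.Build();
app.UseRateLimiter();

// Only the health endpoint gets the policy; other endpoints are unaffected
app.MapHealthChecks("/_health").RequireRateLimiting("health");
app.Run();
```

Requests beyond the limit get a 503 (or 429 if configured), so a misbehaving prober can't keep an on-demand database awake.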
Health checks are important, but I have to give a warning about designing them. K8s/Nomad/ECS etc. generally take the view that "unhealthy" services need to be restarted/reallocated. If your services make other services and database dependencies part of their health checks, then when those dependencies have issues or network outages, you can cause all your services to crash and get yourself into a thundering herd of service restarts and failures across your entire system.
They can also create a startup sequence for your services, generally a bad thing when working with distributed systems. ie. Y service cannot start and pass its health check until X service is running.
Keep this in mind when designing your health checks.
Sage words of wisdom
How do we solve this?
@@johncerpa3782 As with everything, configure k8s to give it some time before shutting it off, and think about your whole service infrastructure. k8s gives you two options: a liveness ("alive") health check to see if your application is running, and a readiness health check to see if you are ready to receive traffic, with lots of options for each of them :)
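A minimal sketch of that liveness/readiness split on the ASP.NET Core side, assuming minimal hosting; the tag names and the AddNpgSql call (from the AspNetCore.HealthChecks.NpgSql package) are illustrative assumptions:

```csharp
using Microsoft.AspNetCore.Diagnostics.HealthChecks;
using Microsoft.Extensions.Diagnostics.HealthChecks;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddHealthChecks()
    // Liveness: no dependencies, just "is the process responsive?"
    .AddCheck("self", () => HealthCheckResult.Healthy(), tags: new[] { "live" })
    // Readiness: dependency checks live here, not on the liveness probe
    .AddNpgSql(builder.Configuration.GetConnectionString("Default")!,
               tags: new[] { "ready" });

var app = builder.Build();

// k8s livenessProbe target: never fails because a dependency is down
app.MapHealthChecks("/health/live", new HealthCheckOptions
{
    Predicate = r => r.Tags.Contains("live")
});

// k8s readinessProbe target: traffic is withheld while dependencies are down
app.MapHealthChecks("/health/ready", new HealthCheckOptions
{
    Predicate = r => r.Tags.Contains("ready")
});

app.Run();
```

With this split, a database outage makes the pod not-ready (no traffic) without making it not-alive (no restart loop).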
Thanks for this man
Sure, with this, IIRC, you can create different endpoints. One could be a container-level service check for k8s, and another could be an application-level check for the load balancer/target group, etc.
I've been a software developer for almost 5 years. Worked on projects for ~5 different companies. Never knew this existed. Now I might need to bring this up at work... I can't imagine how much time this could save us. Great stuff Nick. Thanks!
Blows my mind. I didn't realise there was such a simple but powerful tool for health checks built in already. Thanks for demonstrating!
I was planning on creating 2 methods in my APIs.
1, "Ping" shows the API is "alive".
2. "HealthCheck" digs down into all of the "dependencies".
I'm thanking God I came across your videos first. No matter what you want to build, there is probably a library or API that already does it. Thanks for another great video.
You know, not only does Nick make my C# so much better, his way of developing is transferable to other languages. As soon as I've mastered things like this, I go looking for the same type of thing in Rust, Go & Erlang. Thanks for being brilliant mate, you're a star
100% Agree! If I can suggest: become a Patreon :-)
Very cool, wouldn't mind seeing a followup video to explain how to actually have a alerting tool connected to it.
I double that suggestion
I like using Uptime Kuma as the client for healthchecks. My scenarios are fairly simply and all I want is a notification which it's great for
I knew health checks were a thing, but didn't know how comprehensive and easy they are to setup. So thank you! I was curious to see more about the authorization options on the _health endpoint. You said it could be "network based" but you didn't show an example of that.
Nice and clean Nick, thanks for this amazing content... buuuut the application's health check shouldn't check components external to itself, because if the DB fails, it doesn't make much sense to redeploy all the containers that use that DB; doing so won't bring the DB back up, because the problem is not in the application container but in the DB.
What is suggested is to establish a health check for the DB itself, although for infrastructure issues a monitoring or alarm system works better, because recovery is not as simple as just creating a new Postgres or a new RabbitMQ, etc.
You don't necessarily have to restart the service. Just raise an alert and have someone handle it manually or, as shown with the health check response writer, you can get a detailed overview of what's borked so you can automate the actions.
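That detailed overview can come from a custom response writer. A hedged sketch, assuming `app` is a minimal-hosting WebApplication with checks already registered (namespaces: Microsoft.AspNetCore.Diagnostics.HealthChecks, System.Linq):

```csharp
// Return per-check detail as JSON so an alerting job can see exactly
// which dependency is failing rather than a bare "Unhealthy" string.
app.MapHealthChecks("/_health", new HealthCheckOptions
{
    ResponseWriter = async (context, report) =>
    {
        context.Response.ContentType = "application/json";
        var payload = new
        {
            status = report.Status.ToString(),
            checks = report.Entries.Select(e => new
            {
                name = e.Key,
                status = e.Value.Status.ToString(),
                error = e.Value.Exception?.Message
            })
        };
        await context.Response.WriteAsJsonAsync(payload);
    }
});
```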
Love how short yet so valuable your videos are
Thanks a lot! But I still have a question: how do you implement health checks in console applications? I have workers in my project and I need to be sure that they are alive.
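For a non-web worker, one option is to register checks on the generic host and publish results periodically instead of exposing an HTTP endpoint. A sketch assuming .NET 7+ generic hosting; FileHealthCheckPublisher and the file path are illustrative names, not a real library type:

```csharp
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Diagnostics.HealthChecks;
using Microsoft.Extensions.Hosting;

var builder = Host.CreateApplicationBuilder(args);

builder.Services.AddHealthChecks()
    .AddCheck("worker-loop", () => HealthCheckResult.Healthy());

// The publisher hosted service runs all checks on a schedule
builder.Services.Configure<HealthCheckPublisherOptions>(o =>
{
    o.Delay = TimeSpan.FromSeconds(5);    // grace period after startup
    o.Period = TimeSpan.FromSeconds(30);  // publish interval
});
builder.Services.AddSingleton<IHealthCheckPublisher, FileHealthCheckPublisher>();

await builder.Build().RunAsync();

// Writes a touch-file that an orchestrator can watch with an exec probe
sealed class FileHealthCheckPublisher : IHealthCheckPublisher
{
    public Task PublishAsync(HealthReport report, CancellationToken ct)
    {
        if (report.Status == HealthStatus.Healthy)
            File.WriteAllText("/tmp/healthy", DateTimeOffset.UtcNow.ToString("O"));
        else
            File.Delete("/tmp/healthy"); // no-op if the file is already gone
        return Task.CompletedTask;
    }
}
```

An exec liveness probe can then check the file's freshness without the worker hosting any HTTP server.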
I’ve been using the mentioned packages as well. They even have a UI package to enable a visual overview of health checks. I run this UI in my k8s cluster to get a nice detailed overview of the microservices' health.
Hi, what's the name of the Ui Package?
I'm Brazilian, i really love your content!
Thanks Nick! A lot of people explain how to use these, but so far you are the only one I have seen that has mentioned how to secure it. On the Microsoft microservices shop on containers they use the UI as well. It is pretty cool.
Health checks are really important. You know what else is important? More podcast episodes!! 😛
Nice introduction to health checks, super useful 😁 I once lost our company an hour of sales at peak hour because the service was considered unhealthy. The database was being slow and the health checks were failing, rightfully so. Because I knew nothing about configuring Kubernetes correctly, it was killing the pods, thinking they needed to be restarted to become healthy.
Little did I know this was adding more pressure to the other pods and making the issue worse. The database was the problem, and now we no longer had enough instances to handle the traffic properly on top of that. Learn the difference between a liveness probe (does it need to be restarted?) and a readiness probe (should it receive traffic?). Putting your health check on the liveness probe might cost you a lot of money!!! 🙀
I was aware but hadn't seen them implemented. Thanks, just started a fresh API project, that's going in up front.
Nick, you are the best. God bless you with more success. Thank you for sharing great stuff with us.
Thanks for this Nick. I have been using these health checks for a few projects now and can't live without them. Glad you're bringing more people into the fold.
This is also nice because you can set up different checks, and how deep they go. You can use a very high level check for a status dashboard, and then a lower-level check for the load balancer, and then even a lower-level one for the main service of a container, etc. It's really flexible.
I know the health checks and I use it in my projects. But I am unaware of the packages. Thank you for introducing new things to us.
New skill has unlocked.! Thanks Nick!
I had no clue that this was built in. I will add this when I return to work.
The best on Health check ever, thanks Nick.
never heard of such a tool, definitely gonna try using these in my project
Didn't know about it. Nice! Thanks for sharing it with us!
Didn't know that in such detail and comprehensive implementation. Amazing!
very helpful, so glad to know there are now available packages to ease our healthchecks. Thanks for sharing :)
More awesome content, brilliant thanks. Will be implementing this in the services we're currently building.
This was really useful information. Thank you very much Nick!
I kind of knew health checks existed but I haven't used them in the past. I am in the process of writing an app at the moment that would really benefit from using them so I'll be looking into integrating it into the project this week.
Good to know this. Nick is spoiling us !!!!!! Lol
As usually a super simple explanation of matters that everybody should know. Just perfectly!
Good video, I didn't know about the health checks. Thanks for sharing your knowledge.
Can you do a video on health check strategies? e.g. what to put in startup, readiness and liveness checks
Nick, I checked out your courses. I plan on doing them in the future if you send out promo codes. $80+ USD is expensive for a course, I don’t doubt they are worth every penny but courses are used for familiarizing developers with information. And in todays world of teachable and Udemy $80+ is expensive for one course. Standing by for newsletter promos.
Excellent video Nick
Thank you Nick. This works!
Nice video, didn't know about this tbh...really neat, thanks for sharing!
Great video! Would be also good to see how would you implement health checks in azure functions isolated.
Thanks for sharing good practices! Maybe you should make a video of all useful 3rd party packages used by you and other trusted sources 🙏
I didn't know this I'm gonna use it in my project
Thank you for this great video!
This is great. Thanks.
Thanks great video once again. Do you think you can do a video about your previous role as Software engineering manager ? pro and cons of the role, and what makes a good software engineer manager ? Thanks!
Another excellent informative video. My only comment would be... it would be nice if these updates found their way into your courses as well
They do. The REST API course covers healthchecks
Didn't know about that library support, but do you have a video on how to limit access to internal servers? Or is it just a matter of making the requireHost?
Hi Nick, nice informative video tutorial as always. I've learnt many things from your content, especially regarding APIs. I have one question: is there any possibility to health check a worker service which is hosted on a remote server? Your comment is much appreciated. Thanks.
Great one Nick
Would you make a video for null object pattern?
Awsome content 🙏🏼
Awesome video, thanks
Hello Nick! Thanks for this great video!
Wanted to ask if you know what is invoking the health check endpoint periodically (the period is 1 minute). The request logging middleware is not processing those health check requests, so it must be something inside the application. And each time, I see the following string in the logs with severity "Debug": "Running health check publishers". Many thanks if you (or anybody else) could help me figure this out...
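For what it's worth, that "Running health check publishers" line comes from the hosted service that drives any registered IHealthCheckPublisher; some monitoring/UI packages register a publisher for you, which would explain the 1-minute cadence. Its schedule is configurable through the standard options type; a sketch (the interval values are illustrative):

```csharp
using Microsoft.Extensions.Diagnostics.HealthChecks;

// Adjust the publisher schedule; this is what fires the
// "Running health check publishers" debug log entry.
builder.Services.Configure<HealthCheckPublisherOptions>(options =>
{
    options.Delay = TimeSpan.FromSeconds(10);  // wait after startup
    options.Period = TimeSpan.FromMinutes(5);  // how often publishers run
});
```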
Great video. How does the memory usage look when the health checks are used periodically? Is this a lightweight function or does it leave objects in memory?
Great video!
Didn't know about this. Good one
Are you speed talking right to the end of the video? LOL. Great content by the way
Great content as always.
Do you have the projects that you show around here hosted anywhere?
I was actually searching for your movies API on GitHub and didn't find it. I was looking for a piece of code that you probably already showed in another video but couldn't figure out which one 😅
I actually just checked; it's on Patreon, right?
Yeah all the code in my videos is on Patreon
@@nickchapsas A suggestion for future videos, would enjoy to see you talk about specification pattern.
Would you expand on this and show how to configure something like nginx running as a LB to detect unhealthy endpoints and reroute traffic?
Can you explain Null Object Pattern?
You gotta be careful with this as it doesn't play well with a lot of infrastructure out of the box.
A lot of the time, health checks are required to verify the startup of a service, which means that if some dependencies are down, they can prevent your service from reporting a successful startup to an orchestrator (e.g. Kubernetes), which can cause a chain of service restarts. I've even seen secondary effects with Anthos (Istio) where changes to the cluster get blocked for 30 minutes because it's unable to verify the last change, which is this one service that's not coming up because its health check is failing or otherwise unhealthy.
Additionally, I would urge you not to use this same endpoint as the health check endpoint your load balancers use to check backend health, or at least not without some additional considerations. Oftentimes, those health checks need to be lightning fast or they will time out and lead to the backend being taken out of the pool. Our solution involved inserting some middleware to handle HEAD requests specifically and immediately respond with 200 without running any deeper tests, then making sure the LBs used HEAD requests specifically. In retrospect, it may have been even better to separate the "health checks" in this video onto /health for public consumers (e.g. load balancers; responds immediately) and /healthz for internal use (the report/deeper checks/etc.)
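A hedged sketch of that HEAD fast-path, assuming it is registered before the health check middleware in a minimal-hosting app (the "/health" path prefix is an assumption):

```csharp
// Respond to load-balancer HEAD probes instantly, without running any checks.
// Deeper checks remain available on the full GET endpoint.
app.Use(async (context, next) =>
{
    if (HttpMethods.IsHead(context.Request.Method) &&
        context.Request.Path.StartsWithSegments("/health"))
    {
        context.Response.StatusCode = StatusCodes.Status200OK;
        return; // short-circuit: no health checks executed
    }
    await next();
});
```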
This is a great comment, definitely some things to consider. I followed your advice as it made a great deal of sense...
Most of the time, services have more than one downstream dependency and use them in different ways on different endpoints. Rare is the situation where all your endpoints depend on all of the dependencies. In the example from the video, if you decide to say "I'm unhealthy" when the DB is down, you won't be able to serve even the requests that don't touch the DB at all.
Nick did mention the degraded state, but that doesn't tell k8s anything.
A different way to understand this is that the health of a service means its own health, not its dependencies'. If a dependency is down, I'll throw a 500, big deal. The health check is used when the pipeline spins up a new version of the pods and traffic needs to be redirected from the old pods to the new ones. Booting the app takes time, and during all that time the new pod is being pinged on /health to check whether it can receive traffic. When it says OK, traffic goes to the new one and the old ones die.
This maps quite well to Kubernetes restart policies, but a lot more implementation is needed.
Excellent video Nick. Can I get link to the code ?
Is there some NuGet package to get a UI on those? That could be cool.
We implement health checks, but within a controller and MediatR; inside the handler we do a SQL select and RabbitMQ message publishing. So basically "manually".
Would you return Degraded instead of Unhealthy if there were still requests you’d be able to handle without the database?
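Yes, a custom check can do exactly that. A sketch; DatabaseHealthCheck and PingDatabaseAsync are hypothetical names for illustration:

```csharp
using Microsoft.Extensions.Diagnostics.HealthChecks;

// Reports Degraded rather than Unhealthy when the app can still serve
// some traffic (e.g. cached reads) without the database.
public sealed class DatabaseHealthCheck : IHealthCheck
{
    public async Task<HealthCheckResult> CheckHealthAsync(
        HealthCheckContext context, CancellationToken ct = default)
    {
        try
        {
            await PingDatabaseAsync(ct); // hypothetical connectivity probe
            return HealthCheckResult.Healthy();
        }
        catch (Exception ex)
        {
            // Cached reads still work, so: degraded, not unhealthy
            return HealthCheckResult.Degraded("Database unreachable", ex);
        }
    }

    private static Task PingDatabaseAsync(CancellationToken ct) =>
        Task.CompletedTask; // placeholder for a real SELECT 1 / ping
}
```

Degraded still returns HTTP 200 by default, so load balancers keep routing traffic while your monitoring flags the issue.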
Are there some pre-built systems to interact with these health checks or would you make your own?
The "calling" of the health check is part of a different service, for example DataDog alerting or K8s health endpoints
Creating a database connection for every HTTP request? That doesn't sound like a good approach to me.
What are your thoughts on monitors that run periodically (for example every 10 minutes) and check for invalid data for certain states or db entries?
Could you please share a link for the naming convention? Everything I found on the web names the endpoint "/health"... I know this is just a detail, but it would be great to know anyway :D
Cool video, thanks for sharing. I have been using health checks but I have a question. Say you are using shared dependencies like Cosmos DB or 3rd party API, if those go down then all of your apps will be marked as unhealthy. What is the best way to handle this?
Can the health check API be created for cron jobs? (A utility which runs periodically and doesn't have a UI, but could host a domain and expose a health status endpoint.)
Thank you for the great tip and video.
Could you or someone please direct me to an example of adding some server performance parameters to the health check's returned JSON report, such as current workload, number of requests in the past x minutes, concurrent requests, average response time, etc.? I plan to use this data with my supervising/orchestrating server for fault-tolerant routing and load balancing, and also to serve data to my dashboard app showing all my servers' well-being and operation.
UPDATE: Never mind, just found your other video "Creating Dashboards with .NET 8’s New Metrics!" exactly what I needed. Thanks. (not deleting my post, maybe will be useful for somebody else)
I have a number of "plugins" that may or may not be present in app. Is it possible to add more health checks "on the fly" after those plugins have been loaded ?
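Checks can't be added to the DI container after it is built, but one workaround is a single composite check that enumerates whatever plugins are loaded at the moment it runs. IPlugin and IPluginRegistry here are hypothetical abstractions for illustration, not real library types:

```csharp
using Microsoft.Extensions.Diagnostics.HealthChecks;

// Hypothetical plugin abstractions
public interface IPlugin
{
    string Name { get; }
    Task<bool> IsHealthyAsync(CancellationToken ct);
}

public interface IPluginRegistry
{
    IReadOnlyList<IPlugin> LoadedPlugins { get; }
}

// Registered once at startup; evaluates the current plugin set on every run,
// so plugins loaded later are picked up automatically.
public sealed class PluginsHealthCheck : IHealthCheck
{
    private readonly IPluginRegistry _plugins;
    public PluginsHealthCheck(IPluginRegistry plugins) => _plugins = plugins;

    public async Task<HealthCheckResult> CheckHealthAsync(
        HealthCheckContext context, CancellationToken ct = default)
    {
        var failures = new List<string>();
        foreach (var plugin in _plugins.LoadedPlugins) // snapshot at check time
        {
            if (!await plugin.IsHealthyAsync(ct))
                failures.Add(plugin.Name);
        }

        return failures.Count == 0
            ? HealthCheckResult.Healthy()
            : HealthCheckResult.Unhealthy(
                $"Failing plugins: {string.Join(", ", failures)}");
    }
}
```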
Awesome! very informative.
We don't use health checks. We love surprises! (lol)
Great tool to have. Does anyone use any kind of monitoring service to actively monitor these endpoints though? I'm looking to implement this on several client APIs and would love to have something that can just monitor all of them and email me if something changes from Healthy to Degraded or Unhealthy. Haven't really found anything that will do that yet and am hoping to avoid needing to roll my own monitoring app.
I have one question: what’s the best way to call these checks? If you don’t wanna use batch jobs running most likely on the same servers as the app you wanna check. Is it best to call them using “external” tools like Selenium or Control-M?
I have found that connecting to the database on every check causes issues with the connection session pools when using Oracle in a Kubernetes pod, so our DBAs have forbidden us from doing database connection checks.
Hi @nickchapsas how you can add/implement a healthcheck for a Ftp server?
What if I have a simple worker (non-web) application? Is there an elegant way to provide an HTTP health endpoint without converting it to a WebHost?
very neat!
Awesome video as per usual! I do have one caveat though, if I want the health checks to appear in my Swagger automagically, how would I go about doing that without having to reimplement larger parts myself?
You would just register the health check middleware after the swagger one
@@nickchapsas Does that work even with a custom response writer?
I am using them and they are great but how do I test them in integration tests (not unit test) ? I mean how can I fake the response this checks get from the services or resources ?
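One way to fake dependencies in an integration test is to boot the app with WebApplicationFactory and swap the registered checks for stubs. A sketch; the "/_health" path, the check name, and xUnit usage are assumptions about the app under test:

```csharp
using System.Net;
using Microsoft.AspNetCore.Mvc.Testing;
using Microsoft.AspNetCore.TestHost;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Diagnostics.HealthChecks;
using Xunit;

public class HealthEndpointTests : IClassFixture<WebApplicationFactory<Program>>
{
    private readonly WebApplicationFactory<Program> _factory;

    public HealthEndpointTests(WebApplicationFactory<Program> factory) =>
        _factory = factory.WithWebHostBuilder(builder =>
            builder.ConfigureTestServices(services =>
                services.Configure<HealthCheckServiceOptions>(options =>
                {
                    // Drop the real checks (db, redis, ...) and stub one as failing
                    options.Registrations.Clear();
                    options.Registrations.Add(new HealthCheckRegistration(
                        "database",
                        _ => new StubCheck(HealthStatus.Unhealthy),
                        failureStatus: null,
                        tags: null));
                })));

    [Fact]
    public async Task Reports_unhealthy_when_database_check_fails()
    {
        var response = await _factory.CreateClient().GetAsync("/_health");
        Assert.Equal(HttpStatusCode.ServiceUnavailable, response.StatusCode);
    }

    private sealed class StubCheck : IHealthCheck
    {
        private readonly HealthStatus _status;
        public StubCheck(HealthStatus status) => _status = status;

        public Task<HealthCheckResult> CheckHealthAsync(
            HealthCheckContext context, CancellationToken ct = default) =>
            Task.FromResult(new HealthCheckResult(_status, "stubbed"));
    }
}
```

Clearing and re-adding registrations avoids the test hitting real infrastructure while still exercising the real endpoint, middleware, and response writer.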
I see all the packages discussed at the end of the video use the HealthCheckRegistration object for registering IHealthCheck classes. I've tried passing a timeout down, but it doesn't seem to obey it. For example, I've passed in TimeSpan.FromSeconds(2) as the timeout, but the health check still runs for the default Redis timeout length of 5 seconds before returning. Has anyone successfully used the Timeout property of HealthCheckRegistration, and is there something special that needs to be done?
I see the problem. The current NuGet package for Redis health checks is from early '22. The current source code shows that it uses the timeout as I'd expect. However, when I decompiled the current NuGet package, I saw that the source is different and it isn't leveraging the timeout.
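Until the package is fixed, one workaround is a decorator that enforces the timeout itself, regardless of whether the inner check honors the cancellation token. A sketch; TimeoutHealthCheck is an illustrative name:

```csharp
using Microsoft.Extensions.Diagnostics.HealthChecks;

// Wraps any IHealthCheck and caps how long the report waits for it.
// Note: if the inner check ignores cancellation, it keeps running in the
// background; this only bounds the reported latency.
public sealed class TimeoutHealthCheck : IHealthCheck
{
    private readonly IHealthCheck _inner;
    private readonly TimeSpan _timeout;

    public TimeoutHealthCheck(IHealthCheck inner, TimeSpan timeout) =>
        (_inner, _timeout) = (inner, timeout);

    public async Task<HealthCheckResult> CheckHealthAsync(
        HealthCheckContext context, CancellationToken ct = default)
    {
        var checkTask = _inner.CheckHealthAsync(context, ct);
        var finished = await Task.WhenAny(checkTask, Task.Delay(_timeout, ct));

        return finished == checkTask
            ? await checkTask
            : HealthCheckResult.Unhealthy($"Check timed out after {_timeout}");
    }
}
```

It can be registered with the instance overload, e.g. `.AddCheck("redis", new TimeoutHealthCheck(innerCheck, TimeSpan.FromSeconds(2)))`, where `innerCheck` is the packaged check instance.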
Didn't know about health checks. How would you go about monitoring them though? Have a service that calls the endpoint at set intervals and sends a mail or something?
Yep! Also, for docker + orchestration you generally get this out of the box. Some cloud apps also use health checks and plug in to this pattern pretty easily
Is there a way to redirect the request to another service when the health check doesn't pass?
great stuff
thx a lot!
Nick, how do you get email alerts from health checks?
After configuring these. Should I have some other separate monitoring app or service periodically calling the health endpoint and notifying me if it's down?
Well, yes, that's one way to know if your service is healthy. Projects that handle huge traffic usually have some proxy API acting as a load balancer. Those load balancers can call /health and know which instances are healthy and up.
Wow, did I miss anything? Since when can you do
CancellationToken cancellationToken = new() ?
No default? No CancellationTokenSource?
This feature was added in C# 9
@@nickchapsas I'm not talking about new(), I'm talking about creating Token directly, not through the CTSource.
@@sergeybenzenko6629 CancellationToken is a struct. All structs have the default constructor. It is the same as using default
new CancellationToken() - or equivalently CancellationToken.None produces a cancellation token which never will be cancelled.
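A tiny demonstration of that point:

```csharp
// All three produce a token that can never be cancelled:
CancellationToken a = new();                 // C# 9 target-typed new on a struct
CancellationToken b = default;
CancellationToken c = CancellationToken.None;

Console.WriteLine(a.CanBeCanceled); // False

// A cancellable token still requires a CancellationTokenSource:
using var cts = new CancellationTokenSource();
Console.WriteLine(cts.Token.CanBeCanceled); // True
```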
weird timing, I was *just* learning about this and implementing it
Those are passive health checks: they run only when you call the endpoint. I've seen another implementation where they are triggered periodically (e.g. every minute) and the endpoint gives you the last results.
Yes I knew. And it’s a really tricky problem with k8s if you’re using external cloud services that then fail.
It creates a recycle loop because it’s down for the pods but the pod itself is healthy. It’s the services that aren’t and no amount of recycling will fix it.
There needs to be something more for k8s, like "healthy but screwed", that you can throw notifications on but that doesn't cause k8s to try to recover.
How to add it to console app? ;)
I have a funny one: the health check BackgroundService creates an empty file every 5 minutes, and if Kubernetes doesn't see a fresh file for 15 minutes, it restarts the pod.
great
The term "Health Check" reminds me of that crappy React + Fluent "PC Health Check" applet from MS.
"in Any .NET App" is not true