The content quality is amazing. Complete, concentrated and straight to the point. Awesome, thank you!
Thanks for the positive feedback!
Subscribed! Your explanation is very professional, thank you from Chile 🇨🇱
High quality videos, congratulations!
Well done! I've been using the logging module for AGES and didn't know 25% of this stuff. Thanks for explaining it so well!
Thank you, and glad you found it useful!
I'm loving your vids! Maybe something like metrics would be right up your alley.
I do have a video planned about metrics and tracing (e.g. Sentry, Datadog, Prometheus, OpenTelemetry, etc.). I'm still in the brainstorming phase though :)
Good content!
Thank you so much for this great and high quality content.
Please increase the editor font size a little bit.
Subscribed
Will do!
Would be cool if you did a video about how Python logging also works with OpenTelemetry
That sounds like a solid topic, I'll give it some thought. Thanks for the suggestion!
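As a starting point, here is a minimal sketch (assuming the opentelemetry-api package and a tracer provider configured elsewhere; the filter class and format string are illustrative, not an official OpenTelemetry recipe) of stamping the current trace and span IDs onto standard log records:

```python
import logging

from opentelemetry import trace


class TraceContextFilter(logging.Filter):
    """Attach the current OpenTelemetry trace/span IDs to every log record."""

    def filter(self, record: logging.LogRecord) -> bool:
        ctx = trace.get_current_span().get_span_context()
        # Format the IDs as hex strings; all zeros means there is no active span.
        record.trace_id = f"{ctx.trace_id:032x}"
        record.span_id = f"{ctx.span_id:016x}"
        return True


handler = logging.StreamHandler()
handler.addFilter(TraceContextFilter())
handler.setFormatter(logging.Formatter(
    "%(asctime)s %(levelname)s [trace_id=%(trace_id)s span_id=%(span_id)s] %(message)s"
))
logging.basicConfig(level=logging.INFO, handlers=[handler])

logging.getLogger(__name__).info("correlated with the active trace, if any")
```

Outside an instrumented code path the IDs simply render as zeros, so the same format works everywhere.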
what do you think about loguru?
I do not have enough experience with loguru to give an educated opinion. I think it works really well at hiding a lot of the boilerplate from the default module and gives you a TON of options.
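As a rough illustration of that point (a minimal sketch; the file name and rotation/retention values are just placeholders):

```python
from loguru import logger

# Colored, formatted output with zero configuration.
logger.info("Works out of the box")

# One call adds a rotating file sink; with the standard library this would
# mean wiring up a handler, a formatter and RotatingFileHandler by hand.
logger.add("app.log", rotation="10 MB", retention="7 days", level="INFO")
logger.error("Something went wrong: {}", "details here")
```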
However, I've never had the need for such a setup, even in enterprise code bases. Colors and such are great if you keep reading the output in a console, but in a more streamlined setup logs are usually gathered/pushed to an external service such as Sentry, Datadog, Azure Log Analytics, AWS CloudWatch, GCP Cloud Logging or equivalent services. With a good alerting setup you only need to query and search the logs when something breaks.
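To make the "push to an external service" point concrete, here is a minimal sketch (standard library only; the field names are just an example) of emitting one JSON object per line to stdout, which a Datadog/CloudWatch-style agent can pick up and ship:

```python
import json
import logging
import sys


class JsonFormatter(logging.Formatter):
    """Render each log record as a single JSON line."""

    def format(self, record: logging.LogRecord) -> str:
        payload = {
            "timestamp": self.formatTime(record),
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        }
        if record.exc_info:
            payload["exception"] = self.formatException(record.exc_info)
        return json.dumps(payload)


handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(JsonFormatter())
logging.basicConfig(level=logging.INFO, handlers=[handler])

logging.getLogger(__name__).info("structured and ready for an aggregator")
```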
@Timnology-r4s Yes. What matters to me is avoiding useless code and not having to read the Docker console output in the early stages of a deployment.
Cool, a familiar face on YouTube.
Can't say I wasn't inspired by a familiar face :)