Web-Queue-Worker Architecture Style for Scaling

  • Published 9 Jul 2024
  • Web-Queue-Worker is an excellent architecture pattern you can add to your toolbox. It's just a pattern and can work with a monolith, modular monolith, microservices, or whatever. It provides many benefits for scaling by moving work into the background, especially if you have long-running jobs, workflows, or even recurring batch jobs. (A minimal sketch of the pattern follows below this description.)
    🔗 EventStoreDB
    eventsto.re/codeopinion
    🔔 Subscribe: / @codeopinion
    💥 Join this channel to get access to a private Discord Server and any source code in my videos.
    🔥 Join via Patreon
    / codeopinion
    ✔️ Join via UA-cam
    / @codeopinion
    📝 Blog: codeopinion.com
    👋 Twitter: / codeopinion
    ✨ LinkedIn: / dcomartin
    📧 Weekly Updates: mailchi.mp/63c7a0b3ff38/codeo...
  • Science & Technology
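
    A minimal sketch of the web and worker halves of the pattern, assuming ASP.NET Core and Azure Storage Queues (the endpoint, queue name, and message shape are illustrative, not from the video):

    // Web process (Program.cs): accept the request, enqueue it, return 202 immediately.
    using System.Text.Json;
    using Azure.Storage.Queues;

    var builder = WebApplication.CreateBuilder(args);
    builder.Services.AddSingleton(new QueueClient(
        builder.Configuration["Storage:ConnectionString"], "report-requests"));
    var app = builder.Build();

    app.MapPost("/reports", async (ReportRequest request, QueueClient queue) =>
    {
        // The slow work is handed off to the background instead of running in-request.
        await queue.SendMessageAsync(JsonSerializer.Serialize(request));
        return Results.Accepted();
    });

    app.Run();

    public record ReportRequest(string CustomerId, string ReportType);

    // Worker process: a separate service that drains the queue at its own pace.
    public class ReportWorker : BackgroundService
    {
        private readonly QueueClient _queue;
        public ReportWorker(QueueClient queue) => _queue = queue;

        protected override async Task ExecuteAsync(CancellationToken stoppingToken)
        {
            while (!stoppingToken.IsCancellationRequested)
            {
                var messages = await _queue.ReceiveMessagesAsync(
                    maxMessages: 10, cancellationToken: stoppingToken);

                foreach (var message in messages.Value)
                {
                    var request = JsonSerializer.Deserialize<ReportRequest>(message.MessageText);
                    // ... do the long-running work here (render the report, call other services, etc.) ...
                    await _queue.DeleteMessageAsync(message.MessageId, message.PopReceipt, stoppingToken);
                }

                await Task.Delay(TimeSpan.FromSeconds(1), stoppingToken);
            }
        }
    }

    The web tier can scale on request volume and the worker tier on queue depth, independently of each other.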

COMMENTS • 34

  • @cdnkarasu
    @cdnkarasu 2 months ago +7

    This exactly sums up our main product. I've always felt it doesn't entirely fit the monolith label and isn't microservices either. Great overview of this valid style.

  • @Omer23007
    @Omer23007 2 months ago

    Great video, as usual!
    Thanks again for sharing.

  • @GigAHerZ64
    @GigAHerZ64 2 months ago +3

    I worked with a service exactly like that. We sometimes had huge spikes of incoming HTTP requests and had to be able to take them in, but it was not particularly necessary to process them immediately. Our HTTP API endpoint was async, with a callback URL given in the request payload. Every request was independent and didn't depend on anything from other requests.
    In the HTTP-request-receiving part we created and saved a "processing context object" into CosmosDB and put the document ID into a queue (Azure Storage Queue).
    The workers listened on the queue, and when any of them got an ID, it loaded the context object from CosmosDB to start processing it. The worker itself had to call other external services, and having a fixed number of workers set up meant we didn't flood our other partners. The HTTP request receivers auto-scaled to hundreds when needed. (A sketch of this flow follows after this thread.)
    The queue scaled really well with small payloads, CosmosDB scaled really well with queries based only on primary keys, etc. Everything worked really well.
    We called it a "multi-instance monolith" or "multi-tier monolith".

    • @CodeOpinion
      @CodeOpinion  2 months ago +2

      Nice! Queue workers can help with rate limiting downstream, as you mentioned, assuming you limit how many messages are processed concurrently. Beauty of a queue.

    • @GigAHerZ64
      @GigAHerZ64 2 months ago

      @CodeOpinion Correct! Forgot to mention that the workers themselves limited their consumption rate. Good point!
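
    A rough sketch of the flow described in this thread, assuming Azure Storage Queues and CosmosDB (the container name and the ProcessingContext shape are invented for illustration). The point is that only the document ID travels on the queue, and the bounded, fixed pool of workers is what protects the downstream partners:

    using Azure.Storage.Queues;
    using Microsoft.Azure.Cosmos;

    // Lowercase "id" so CosmosDB accepts it as the document id.
    public record ProcessingContext(string id, string CallbackUrl, string Payload);

    // HTTP receiver side: persist the context, enqueue only its ID.
    public class RequestReceiver
    {
        private readonly Container _contexts;
        private readonly QueueClient _queue;

        public RequestReceiver(CosmosClient cosmos, QueueClient queue)
        {
            _contexts = cosmos.GetContainer("jobs", "processing-contexts");
            _queue = queue;
        }

        public async Task<string> AcceptAsync(string callbackUrl, string payload)
        {
            var context = new ProcessingContext(Guid.NewGuid().ToString(), callbackUrl, payload);
            await _contexts.CreateItemAsync(context, new PartitionKey(context.id));
            await _queue.SendMessageAsync(context.id);  // small payload: just the ID
            return context.id;                          // result goes back later via the callback URL
        }
    }

    // Worker side: load the context by ID and do the real work.
    public class ContextWorker
    {
        private readonly Container _contexts;
        private readonly QueueClient _queue;

        public ContextWorker(CosmosClient cosmos, QueueClient queue)
        {
            _contexts = cosmos.GetContainer("jobs", "processing-contexts");
            _queue = queue;
        }

        public async Task PollOnceAsync(CancellationToken ct)
        {
            // A bounded batch per worker (and a fixed number of worker instances)
            // keeps the external services from being flooded.
            var messages = await _queue.ReceiveMessagesAsync(maxMessages: 5, cancellationToken: ct);
            foreach (var message in messages.Value)
            {
                var id = message.MessageText;
                var response = await _contexts.ReadItemAsync<ProcessingContext>(
                    id, new PartitionKey(id), cancellationToken: ct);
                var context = response.Resource;

                // ... call the external services, then POST the result to context.CallbackUrl ...

                await _queue.DeleteMessageAsync(message.MessageId, message.PopReceipt, ct);
            }
        }
    }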

  • @juststudying1019
    @juststudying1019 2 months ago +6

    Amazing channel

  • @_rcs
    @_rcs 2 months ago

    Thanks for always being so pragmatic and realistic.

  • @ErichHafenmaier
    @ErichHafenmaier 2 months ago +1

    Great video. I started using this pattern with the system I'm currently building/maintaining. We actually started using Hangfire in our .NET API for our background jobs (both scheduled and fire-and-forget) but quickly got into a situation where it became very difficult to work on locally, because several scheduled background jobs fired off as soon as you ran the project locally. We decided to migrate those jobs to .NET worker services with Azure Service Bus handling the messages, and it's been a game changer (a sketch of that setup follows after this thread). We typically use emails for client notification, but I'd like to incorporate some type of real-time streaming log of what the worker is doing; haven't gotten there yet.

    • @piotrkowalski3460
      @piotrkowalski3460 2 months ago +1

      I do the same thing with Hangfire. I configured scheduled jobs to use Cron.Never when running in the development environment.
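
    A rough sketch of that kind of migration target, assuming Azure.Messaging.ServiceBus inside a .NET Worker Service (the queue name and handler body are illustrative):

    using Azure.Messaging.ServiceBus;
    using Microsoft.Extensions.Hosting;

    public class NotificationWorker : BackgroundService
    {
        private readonly ServiceBusProcessor _processor;

        public NotificationWorker(ServiceBusClient client)
        {
            // MaxConcurrentCalls bounds how many messages this instance handles at once.
            _processor = client.CreateProcessor("send-notification",
                new ServiceBusProcessorOptions { MaxConcurrentCalls = 4 });
            _processor.ProcessMessageAsync += HandleMessageAsync;
            _processor.ProcessErrorAsync += args => Task.CompletedTask; // log for real
        }

        protected override async Task ExecuteAsync(CancellationToken stoppingToken)
        {
            await _processor.StartProcessingAsync(stoppingToken);
            try { await Task.Delay(Timeout.Infinite, stoppingToken); }
            catch (OperationCanceledException) { }
            await _processor.StopProcessingAsync();
        }

        private async Task HandleMessageAsync(ProcessMessageEventArgs args)
        {
            var body = args.Message.Body.ToString();
            // ... send the email / do the background job ...
            await args.CompleteMessageAsync(args.Message);
        }
    }

    On the Hangfire side, another way to keep scheduled jobs from firing on every local run (the same effect the Cron.Never trick above achieves) is to only register them outside Development; ICleanupService here is a made-up example:

    // In the API's Program.cs, where Hangfire is configured:
    if (!builder.Environment.IsDevelopment())
    {
        RecurringJob.AddOrUpdate<ICleanupService>(
            "nightly-cleanup", s => s.RunAsync(), Cron.Daily());
    }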

  • @andreashe36
    @andreashe36 2 months ago

    This video is like a dump of my own thoughts. Great. Kind of an acknowledgement that I'm on the right track.

    • @CodeOpinion
      @CodeOpinion  2 months ago

      Glad it could be of some validation!

  • @creamyhorror
    @creamyhorror 2 months ago +2

    Honestly this just seems like the background queue/worker part of typical web frameworks. Rails, Laravel, etc. all have specific components for offloading slower tasks to workers via a work queue. There's also the notification step to handle, where the workers trigger a notification to the application or end user that the work is done so that follow-up steps can be started.

    • @ArinSinabian
      @ArinSinabian 2 months ago

      I am not familiar with Rails or Laravel, but I suspect the queue is not persisted in those frameworks if they don't use a database. The difference here is that you use a broker which persists the messages and handles acknowledgements from the worker. So if you restart your web server or worker, the queue is still there, and if the worker is half done you can continue processing. The tools provided by Laravel or Rails are of course useful, but you must be aware of their drawbacks. They might be useful for some cases and not for others.

    • @pdgiddie
      @pdgiddie 19 hours ago

      At least for Rails, these jobs are persisted in the DB. I think my main issue with these job-queue features of web frameworks is that the task code ends up quite fat/boilerplatey. Doing this with a message broker is more flexible in my experience. But ultimately the pattern _is_ pretty much the same.

  • @Tony-dp1rl
    @Tony-dp1rl 2 months ago

    I think every second video now on this channel is about queues

    • @CodeOpinion
      @CodeOpinion  2 months ago

      There are a bunch, that's for sure. Probably a combination of EDA, ES, and queues. I almost wanted to say I have more about boundaries. Curious, what would you like to see covered instead?

  • @ernest1520
    @ernest1520 2 months ago

    This is very much how event-driven architecture is typically implemented, with one difference: rather than using queues (for commands), it uses topics/exchanges (for events). The overall flow is more or less the same.

    • @CodeOpinion
      @CodeOpinion  2 months ago

      Yup, ultimately. Just to add to that: you won't generally take an HTTP API call and immediately publish an event without anything actually occurring. Meaning, events are published based on something having occurred (a small sketch of that distinction follows below). *You can model requests as events in some situations, but that's another video. Thanks for the comment.
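
    A small illustration of that distinction, assuming MassTransit (the message types are invented): the command is Sent to one specific queue, and the event is only Published after the work has actually happened:

    using MassTransit;

    public record GenerateInvoice(Guid OrderId);   // command: "do this" -> one queue, one consumer
    public record InvoiceGenerated(Guid OrderId);  // event: "this happened" -> topic/exchange, any subscribers

    public class GenerateInvoiceConsumer : IConsumer<GenerateInvoice>
    {
        public async Task Consume(ConsumeContext<GenerateInvoice> context)
        {
            // ... actually generate the invoice first ...

            // Only then announce the fact; subscribers react to something that occurred.
            await context.Publish(new InvoiceGenerated(context.Message.OrderId));
        }
    }

    // In the HTTP API, the incoming request becomes a command, not an event:
    // await sendEndpoint.Send(new GenerateInvoice(orderId));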

  • @bernhardkrickl5197
    @bernhardkrickl5197 2 months ago

    We use this architecture with a message bus to send back events from the worker to the clients.

    • @CodeOpinion
      @CodeOpinion  2 months ago +1

      Yup! Another option is webhooks, depending on whether your clients accept them. Depends on what your clients are.

  • @H0b0Gamers
    @H0b0Gamers 2 months ago

    Would you have multiple queues for different types of jobs?
    For example, we are implementing a queue for generating large PDFs and it's FIFO. Those PDFs are not needed immediately.
    I want to move other heavy logic to the queue, but that data is needed much faster. I can have thousands of PDFs generating and don't want to bottleneck the work I need done faster. (One way to set this up is sketched below.)
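
    One common way to set that up is a queue per job type, each with its own concurrency, so the PDF backlog never starves the faster work. A sketch, assuming MassTransit over Azure Service Bus (the queue names, limits, and consumers are invented):

    using MassTransit;

    services.AddMassTransit(x =>
    {
        x.AddConsumer<GeneratePdfConsumer>();
        x.AddConsumer<RecalculatePricingConsumer>();

        x.UsingAzureServiceBus((context, cfg) =>
        {
            cfg.Host(busConnectionString); // connection string placeholder

            // Slow, not-urgent work: its own queue, deliberately low concurrency.
            cfg.ReceiveEndpoint("generate-pdf", e =>
            {
                e.ConcurrentMessageLimit = 2;
                e.ConfigureConsumer<GeneratePdfConsumer>(context);
            });

            // Faster work: a separate queue, so it is never stuck behind thousands of PDFs.
            cfg.ReceiveEndpoint("recalculate-pricing", e =>
            {
                e.ConcurrentMessageLimit = 16;
                e.ConfigureConsumer<RecalculatePricingConsumer>(context);
            });
        });
    });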

  • @tmbarral664
    @tmbarral664 2 months ago

    Pain point: yep, one. Something you only slightly talked about: the job-done feedback. Dunno about AWS, but on GCP, with an LB to spread the load, the websocket will time out after 30s.
    So if you plan to use websockets to get notified, you'd better have jobs running below 30s. Pretty annoying.
    If you have ideas on how to circumvent this, I'd be glad to hear them ;)
    Cheers.

  • @thedacian123
    @thedacian123 2 months ago

    I did not understand how the worker communicates the processing result back to the HTTP API. Thank you!

    • @CodeOpinion
      @CodeOpinion  1 month ago

      Libraries used for websocket communication will abstract this from you in a distributed environment by using an underlying backplane. For example, SignalR will use Redis for this. So if you dispatch a message to a client/group from a worker, it ultimately goes through the HTTP API instance that holds the websocket connection to the client (sketched below).
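
      A sketch of that wiring, assuming SignalR with the Redis backplane (the hub, group, and method names are invented, and the worker must also reference ASP.NET Core so it can register SignalR against the same backplane as the API):

      using Microsoft.AspNetCore.SignalR;

      public class JobHub : Hub { }   // mapped in the HTTP API; clients connect there

      // In the worker's Program.cs, register SignalR against the same Redis backplane as the API:
      // builder.Services.AddSignalR().AddStackExchangeRedis(redisConnectionString);

      public class JobCompletedNotifier
      {
          private readonly IHubContext<JobHub> _hub;
          public JobCompletedNotifier(IHubContext<JobHub> hub) => _hub = hub;

          // Redis routes this to whichever API instance holds the client's websocket connection.
          public Task NotifyAsync(string jobId) =>
              _hub.Clients.Group(jobId).SendAsync("JobCompleted", jobId);
      }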

  • @sultanmohammad2135
    @sultanmohammad2135 2 months ago

    So, this entire system can be called a distributed system?

  • @thomaseichinger1717
    @thomaseichinger1717 2 months ago

    uuhh.. yeah! I like this pattern but it is very difficult.

  • @zumanoka3310
    @zumanoka3310 2 months ago +1

    Derek, is it possible to see the code for the patterns you discussed somewhere?

    • @CodeOpinion
      @CodeOpinion  2 months ago +1

      You can get access to that code if you join the channel. Not a lot of videos have code shown because the content is generally more abstract, and you can apply it depending on the language/platform you're using.

  • @andrewiecisa2907
    @andrewiecisa2907 2 months ago +4

    I use this pattern with the help of the MassTransit library. The only gotcha I found is that the size of the payload may be greater than the message capacity.

    • @BetrayedEnemy
      @BetrayedEnemy 2 months ago +1

      I'm having the same problem. I just put the data somewhere else and reference it inside the message.

    • @CodeOpinion
      @CodeOpinion  2 months ago +1

      Yup. The solution is the Claim Check Pattern: ua-cam.com/video/peMRaht4Cwc/v-deo.html (a minimal sketch follows below).
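
      A minimal sketch of the claim check idea, assuming Azure Blob Storage as the payload store (the message type and blob naming are invented; MassTransit also ships a built-in message data feature for this): the large payload goes to blob storage and only a reference travels on the message.

      using Azure.Storage.Blobs;

      // The message stays small: just a pointer to the real payload.
      public record GenerateReport(Guid ReportId, Uri PayloadBlobUri);

      public class ReportRequestSender
      {
          private readonly BlobContainerClient _blobs;
          public ReportRequestSender(BlobContainerClient blobs) => _blobs = blobs;

          public async Task<GenerateReport> BuildMessageAsync(Guid reportId, Stream largePayload)
          {
              // Check the payload in: upload it to blob storage...
              var blob = _blobs.GetBlobClient($"reports/{reportId}.json");
              await blob.UploadAsync(largePayload, overwrite: true);

              // ...and put only the claim (the blob URI) on the bus.
              return new GenerateReport(reportId, blob.Uri);
          }
      }

      public class GenerateReportHandler
      {
          private readonly BlobContainerClient _blobs;
          public GenerateReportHandler(BlobContainerClient blobs) => _blobs = blobs;

          public async Task HandleAsync(GenerateReport message)
          {
              // Redeem the claim: download the payload the worker actually needs.
              var blobName = new BlobUriBuilder(message.PayloadBlobUri).BlobName;
              var payload = await _blobs.GetBlobClient(blobName).DownloadContentAsync();
              // ... deserialize payload.Value.Content and do the work ...
          }
      }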