Avoiding long-running HTTP API requests.

  • Published Dec 18, 2024

COMMENTS • 45

  • @CodeOpinion
    @CodeOpinion  10 months ago

    Want to learn more about software architecture and design? Join thousands of developers getting weekly updates!
    🚀mailchi.mp/63c7a0b3ff38/codeopinion

    • @zumanoka3310
      @zumanoka3310 10 months ago

      Is there any sample code to show this pattern?

  • @HylandorVT
    @HylandorVT 10 months ago +23

    Another option, if available, is for the caller to provide a callback so the long-running process can tell the caller when to fetch the result.

    • @marna_li
      @marna_li 10 months ago +9

      Yes. "WebHook" style

    • @RicardoSilvaTripcall
      @RicardoSilvaTripcall 10 months ago +1

      Would have proposed the same. In finance, we use it a lot when integrating with partners and customers ...

    • @CodeOpinion
      @CodeOpinion  10 months ago +8

      Agreed. Totally wanted to mention this and forgot!

    • @ShikaIE
      @ShikaIE 10 months ago

      I’d only do this if the client is within my control, as it adds complexity if I have to handle the case of the client’s “webhook” failing.

    • @edgeofsanitysevensix
      @edgeofsanitysevensix 10 months ago

      I like this because it hands off some of the responsibility for dealing with responses to the client. They must specify where they want the response to be sent.

  • @nitrovent
    @nitrovent 10 months ago +13

    If it has to be some REST-style API, I'd do both. 1. POST the id to the service to enqueue/start the conversion. 2. Have a GET endpoint like /status?id={id} to query the status and do polling, or let the user refresh the status manually in the UI. Otherwise some kind of messaging via a queue could be nice.
    The id in the GET could also be a job id returned by the POST.
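
    A minimal sketch of that enqueue-then-poll flow from the caller's side (TypeScript; the endpoint paths and response shape are assumptions for illustration, not from the video):

    ```typescript
    // Sketch of "POST to start, GET /status?id=... to poll" as described above.
    type JobStatus = {
      id: string;
      state: "queued" | "processing" | "done" | "failed";
      resultUrl?: string;
    };

    async function requestTranscript(videoId: string): Promise<string> {
      // 1. POST to enqueue/start the conversion; the service returns a job id.
      const res = await fetch("https://api.example.com/transcripts", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ videoId }),
      });
      const { id } = (await res.json()) as { id: string };

      // 2. Poll the status endpoint until the job finishes (or fails).
      while (true) {
        const statusRes = await fetch(`https://api.example.com/status?id=${id}`);
        const status = (await statusRes.json()) as JobStatus;
        if (status.state === "done" && status.resultUrl) return status.resultUrl;
        if (status.state === "failed") throw new Error(`Job ${id} failed`);
        await new Promise((resolve) => setTimeout(resolve, 2000)); // wait before the next poll
      }
    }
    ```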

    • @morespinach9832
      @morespinach9832 10 months ago

      Polling is expensive. Use SSE.

    • @lizard450
      @lizard450 10 months ago

      @morespinach9832 Expensive how? Sorry if I'm a bit agitated, but I am of the opinion that "expensive" should be a banned word in dev teams.
      Expensive in dev time? Expensive in time to market? Expensive in cheap resources like RAM, disk space, and bandwidth? Expensive in resources I don't care about, like the client's browser?
      Polling is stupid simple and can get you to market quickly and reliably. No mind-bending needed for a dev team to adapt from request/response to publish/subscribe thinking. No additional technology integration.
      At the end of the day, no one cares if a v1 feature was done optimally. If the feature is worth optimizing, the business will request it.
      Not to mention SSE can be blocked on some of your customers' machines. Antivirus perhaps. Same with raw web sockets.
      So using web sockets or SSE is a bad architecture for a LOT of web apps. You're far better off using something like SignalR or socket.io.
      But it's never a bad idea to opt for good old polling for v1.

  • @dzikiLOS
    @dzikiLOS 10 months ago +6

    One thing I would change here is to use server-sent events for updates and HTTP for initiating processes. In my experience websockets tend to get very chatty, and maintaining one-way communication from server to client is easier and simpler.
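
    A minimal sketch of that split, assuming a browser client, a hypothetical /api/transcripts endpoint, and SSE event names invented for illustration:

    ```typescript
    // HTTP to initiate the long-running process, Server-Sent Events for one-way updates.
    async function startAndWatch(videoId: string): Promise<void> {
      // Kick off the job with a plain HTTP request.
      const res = await fetch("/api/transcripts", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ videoId }),
      });
      const { jobId } = (await res.json()) as { jobId: string };

      // Subscribe to server-to-client updates over SSE.
      const events = new EventSource(`/api/transcripts/${jobId}/events`);
      events.addEventListener("progress", (e) => console.log("progress:", (e as MessageEvent).data));
      events.addEventListener("completed", (e) => {
        console.log("transcript ready at", (e as MessageEvent).data);
        events.close(); // done; stop listening
      });
      // On errors the browser reconnects automatically; call close() if the job no longer matters.
    }
    ```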

    • @9akashnp8
      @9akashnp8 10 months ago

      I agree.

    • @morespinach9832
      @morespinach9832 10 months ago

      Absolutely. SSE is better than web sockets. But how about webhooks?

  • @JamesQQuick
    @JamesQQuick 10 months ago +4

    Ooooh, love the take! 100% agreed on long-standing requests. One of my issues is trying to solve this problem in serverless where I don't have access to process queues, cron jobs, etc. Thanks for doing this take!

    • @gigantedocil
      @gigantedocil 10 months ago

      Hey, @JamesQQuick, a bit confused by your take. Why wouldn't you have access to all of those things? If you're on AWS, for example, with an API Gateway + Lambda serverless setup, you can still push to SQS or SNS, and you can have scheduled Lambdas acting as cron jobs.

    • @adambickford8720
      @adambickford8720 10 months ago

      If anything, you should be using queues *more* in serverless to avoid blocking the resource that's billed by the hour.

  • @KristjanBroderLund
    @KristjanBroderLund 10 months ago

    I like the original question posed in the video. Different scenario: if the URL to the transcript is a canonical URL that should be cached, and you want to create the transcript lazily (on demand), e.g. if the URL is /video/{id}/transcript, you'd probably want to have the request run long until it's done and cache the result for subsequent requests.
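
    A rough sketch of that lazy, cacheable variant (Express assumed; the in-memory cache and the generateTranscript stub are stand-ins for illustration):

    ```typescript
    import express from "express";

    const app = express();
    const cache = new Map<string, string>();             // videoId -> finished transcript
    const inFlight = new Map<string, Promise<string>>(); // dedupe concurrent first requests

    // Placeholder for the actual long-running transcription work.
    async function generateTranscript(videoId: string): Promise<string> {
      return `transcript for ${videoId}`;
    }

    app.get("/video/:id/transcript", async (req, res) => {
      const id = req.params.id;
      if (cache.has(id)) return res.type("text/plain").send(cache.get(id)); // cached: fast path

      // First request does the long-running work; concurrent requests share the same promise.
      if (!inFlight.has(id)) {
        inFlight.set(id, generateTranscript(id).finally(() => inFlight.delete(id)));
      }
      const transcript = await inFlight.get(id)!;
      cache.set(id, transcript);
      return res.type("text/plain").send(transcript);
    });

    app.listen(3000);
    ```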

  • @05xpeter
    @05xpeter 8 months ago

    You are completely right about the solutions, but I will say I have seen plenty of examples of people building overcomplicated asynchronous architectures for something they could resolve synchronously well within the 60-second timeout, especially combined with good explanations in the UI. Generally I always ask myself three times if I REALLY need an asynchronous solution in this case, since it brings so many problems with it.

  • @adambickford8720
    @adambickford8720 10 months ago +2

    Also, make your UI reflect the processing. Don't be afraid to show a queue and status to the user vs trying to pretend it's really a blocking req/resp interaction.

  • @bjornsandberglynch5806
    @bjornsandberglynch5806 10 months ago

    If you're not using websockets and there's a reasonably accurate way to estimate the task duration (in this example, from the video file size), you can provide an ETA for completion to the client. This helps with e.g. sizing the progress bar on the client, and avoids incurring server load from polling for a result that isn't available yet.
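
    A small sketch of that idea; the throughput figure and function names are made up purely for illustration:

    ```typescript
    // Estimate processing time from file size, assuming a known average throughput.
    const ASSUMED_THROUGHPUT_BYTES_PER_SEC = 2 * 1024 * 1024; // hypothetical 2 MB/s

    function estimateEtaSeconds(fileSizeBytes: number): number {
      return Math.ceil(fileSizeBytes / ASSUMED_THROUGHPUT_BYTES_PER_SEC);
    }

    // The enqueue response can include this ETA so the client can size its progress bar
    // and hold off on polling until the estimate has elapsed.
    async function pollAfterEta(checkStatus: () => Promise<boolean>, fileSizeBytes: number): Promise<void> {
      const etaMs = estimateEtaSeconds(fileSizeBytes) * 1000;
      await new Promise((resolve) => setTimeout(resolve, etaMs)); // no requests until the ETA passes
      while (!(await checkStatus())) {
        await new Promise((resolve) => setTimeout(resolve, 5000)); // then poll at a slower cadence
      }
    }
    ```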

  • @joelv4495
    @joelv4495 10 months ago +4

    Any long-running process is screaming for a job queue. So the HTTP handler should validate the request, add it to the queue, and return the job id or maybe a URL where the status can be fetched.
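
    A sketch of what that handler could look like (Express, with an in-memory array standing in for a real queue; a production system would push to something like SQS or RabbitMQ):

    ```typescript
    import express from "express";
    import { randomUUID } from "node:crypto";

    const app = express();
    app.use(express.json());

    const queue: { jobId: string; videoId: string }[] = []; // stand-in for a real job queue

    app.post("/transcripts", (req, res) => {
      const { videoId } = req.body as { videoId?: string };
      if (!videoId) return res.status(400).json({ error: "videoId is required" }); // validate first

      const jobId = randomUUID();
      queue.push({ jobId, videoId }); // enqueue; a worker processes it out-of-band

      // Return immediately with the job id and where the status can be fetched.
      return res.status(202).location(`/transcripts/${jobId}/status`).json({ jobId });
    });

    app.listen(3000);
    ```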

  • @vitomedlej3923
    @vitomedlej3923 10 months ago +3

    Very insightful, thank you.

  • @kennedymwenda3357
    @kennedymwenda3357 3 months ago

    Now that we're talking about e-commerce, do we reduce stock levels immediately when the order is submitted, or asynchronously? If asynchronously, how do we alert that an item is out of stock if another shopper exhausted the inventory during the asynchronous process?

    • @CodeOpinion
      @CodeOpinion  3 months ago

      You'll get an email that your order can't be processed/completed. Often a product being out of stock isn't a sales problem, it's a purchasing problem. Oftentimes a product might be sold that isn't actually on hand, but there is a PO to be received that will fulfill the order. There's a concept called "Available to Promise".

  • @georgehelyar
    @georgehelyar 10 months ago

    If any replica of the consumer is able to pick up the result later, then 202 Accepted followed by an event on a message broker, which could then go to a webhook if the consumer is external. But if you want the result to go back to a specific consumer, you might have to deal with asynchronous request/reply, which can add complexity: the response has to go back to a specific consumer, that consumer may disappear before the response is published, and then you might have to clean up any claim-check payload when it eventually dead-letters, etc.
    Wouldn't use polling though.
    If it really has to go back to a specific consumer, and you can just stop if they disappear, then just make it synchronous. Could use gRPC here and maybe even stream the response back as it becomes available.
    Maybe even RTMP depending on the use case (e.g. if you are streaming subtitles to a media player, don't reinvent the wheel).

  • @ransandu
    @ransandu 10 months ago +1

    Option 1: Queue/topic (pub/sub)
    Submit the request with a traceable/trackable id to a queue (the caller becomes the publisher).
    The transcript generation process subscribes to this queue.
    Then the caller subscribes to a transcript-completed event from the transcript generation process.
    Use the traceable id to correlate the incoming messages and grab the transcript.
    Option 2: Request/WebHook
    Make a fire-and-forget request to the transcript generation service (with a traceable id).
    The transcript generation service then posts the result to a webhook exposed by the service that made the request. Use the traceable id to correlate messages.
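
    A sketch of Option 2 (the URLs, field names, and correlation id are assumptions for illustration; Express is used for the webhook side):

    ```typescript
    import express from "express";
    import { randomUUID } from "node:crypto";

    // Caller side: expose a webhook where the transcript service will post the result.
    const caller = express();
    caller.use(express.json());

    caller.post("/webhooks/transcript-completed", (req, res) => {
      const { correlationId, transcriptUrl } = req.body as {
        correlationId: string;
        transcriptUrl: string;
      };
      console.log(`transcript for request ${correlationId} is at ${transcriptUrl}`);
      res.sendStatus(204); // acknowledge receipt
    });

    // Caller side: fire-and-forget request carrying the traceable id and the callback URL.
    async function requestTranscript(videoId: string): Promise<string> {
      const correlationId = randomUUID();
      await fetch("https://transcripts.example.com/jobs", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({
          videoId,
          correlationId,
          callbackUrl: "https://caller.example.com/webhooks/transcript-completed",
        }),
      });
      return correlationId; // nothing to wait for; the result arrives on the webhook
    }

    caller.listen(4000);
    ```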

  • @mravacado1587
    @mravacado1587 3 months ago

    How can you draw the line between "long running" and "very long running"? If the endpoint returns in less than 10 seconds, for example, is it OK to return the results without the need for polling?

  • @ali-celebi
    @ali-celebi 9 months ago

    Great video. Thought provoking! :)

  • @minhduccao9955
    @minhduccao9955 10 months ago

    What if in the business process we get an exception thrown from the code? Should we use email to communicate this back to the client? Any thoughts on this?

  • @Airbone69
    @Airbone69 7 months ago

    What do you think about the client subscribing to a message on a response queue?

  • @thedacian123
    @thedacian123 10 months ago +1

    What about using JavaScript SSE instead of websockets for sending the resource URI to the JS client?

    • @CodeOpinion
      @CodeOpinion  10 months ago +1

      Yup, absolutely could use server-sent events.

    • @mrt181
      @mrt181 10 months ago +1

      @CodeOpinion yes, SSE needs more love

    • @ojcieccc
      @ojcieccc 10 months ago

      Yes! They're also easy to handle in server-to-server scenarios, even if you don't have libraries for that. I guess that goes for any language that lets you handle HTTP/TCP.

  • @PaulPendor
    @PaulPendor 10 months ago

    53 seconds into the video, and I am going to suggest utilising a WebHook pattern to return the result to the caller once the process is complete. The initial request is a quick POST.

  • @skipodap1
    @skipodap1 10 months ago

    Great video, thank you

  • @nobel03
    @nobel03 10 months ago

    If we use websockets, what happens if the client misses the response due to a connection issue? Should websocket messages be persisted so the client can continue reading messages from the last point, like a chat session?

    • @CodeOpinion
      @CodeOpinion  10 months ago

      Sure, depending on the tech being used. You could require the client to ACK the message and have the server retry etc.
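
      A minimal sketch of the ACK-and-retry idea using the 'ws' package (the message shapes and retry interval are assumptions):

      ```typescript
      import { WebSocketServer, WebSocket } from "ws";
      import { randomUUID } from "node:crypto";

      const wss = new WebSocketServer({ port: 8080 });

      wss.on("connection", (socket: WebSocket) => {
        const pending = new Map<string, NodeJS.Timeout>(); // un-ACKed message ids -> retry timers

        // Send a message and keep resending it until the client acknowledges it.
        function sendWithRetry(payload: object): void {
          const id = randomUUID();
          const message = JSON.stringify({ id, ...payload });
          const deliver = () => {
            if (socket.readyState === WebSocket.OPEN) socket.send(message);
          };
          deliver();
          pending.set(id, setInterval(deliver, 5_000)); // retry every 5s until ACKed
        }

        socket.on("message", (raw) => {
          const msg = JSON.parse(raw.toString()) as { type: string; id?: string };
          if (msg.type !== "ack" || !msg.id) return;
          const timer = pending.get(msg.id);
          if (timer) {
            clearInterval(timer); // client confirmed receipt
            pending.delete(msg.id);
          }
        });

        // Stop retrying on disconnect; persist undelivered messages elsewhere if they must survive.
        socket.on("close", () => pending.forEach((timer) => clearInterval(timer)));

        // Example: notify the client that its transcript job finished.
        sendWithRetry({ type: "transcript.completed", jobId: "job-123" });
      });
      ```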