Advanced Concepts in Mule | Streams | Repeatable | Non Repeatable

  • Published Jun 30, 2024
  • Often during file-based integrations you might encounter out-of-memory errors; Mule provides different streaming strategies to counter them (a config sketch follows the timestamps below).
    This video helps you understand the streams used in Mule in detail, with a demo.
    ⏱ Video Timestamps
    ==========================
    0:00 Start
    0:30 Concept of Streams (Collections vs Pagination vs Streams)
    4:30 Chaining of Streams
    5:25 Modules supporting Streaming
    5:46 Streaming strategies
    6:30 Non-Repeatable Streams/Iterables
    8:40 Repeatable In-Memory Streams/Iterables
    13:00 Repeatable File Store Streams/Iterables (pardon the slide title, which says In-Memory)
    15:50 Avoid abusing streams
    19:15 Demo
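    🧩 Streaming config sketch
    ==========================
    A rough sketch of how two of the strategies above are configured as child elements of a streaming operation in Mule 4 XML. Attribute values here are illustrative, not the ones used in the demo; the EE file-store variants come up in the comment threads further down.
      <file:read path="bigFile.csv">
          <!-- Repeatable in-memory: heap buffer grows from initialBufferSize
               up to maxBufferSize, then the stream fails -->
          <repeatable-in-memory-stream initialBufferSize="512"
                                       bufferSizeIncrement="256"
                                       maxBufferSize="1024"
                                       bufferUnit="KB" />
      </file:read>
      <file:read path="bigFile.csv">
          <!-- Non-repeatable: read exactly once, lowest overhead,
               cannot be consumed a second time -->
          <non-repeatable-stream />
      </file:read>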
    📌 Related Links
    ==========================
    🔗 Cache in Mule 4: • Mulesoft 4 | Cache Sco...
    🔗 Transactions in Mule 4: • Mule 4 | XA | Local Tr...
    🔗 Classloading Isolation in Mule 4: • Refined Error handling...
    🔗 API Design Best Practices: • API Design | Best Prac...
    🔗 Custom Policy: • Mule 4: Create Custom ...
    🔗 API Gateway and Autodiscovery: • Mule 4: API Gateway an...
    🔗 Global Error Handler: • Mule 4: Create a Globa...
    🎬 Popular Mule 4 Playlists
    ==========================
    💥 Advanced Concepts in Mule: bit.ly/AdvancedMule
    💥 Mule 4 Custom Connectors: bit.ly/Mule4CustomConnectors
    💥 Dataweave Series: bit.ly/dataweave2
    Let's connect:
    =========================
    💥 Twitter: / vishwas_p13

COMMENTS • 37

  • @atulverma.515 3 years ago +3

    I am new to the Mule arena, and the streaming concept that you have presented is impeccable. It can't be better than this. Keep it up.

  • @bibekbazaz 3 years ago +1

    Very helpful video. It sheds light on many questions, like why the entire data still gets loaded into memory even with streaming enabled, and gives useful tips on how to avoid that. Appreciate the effort.

  • @mjpraveen34 3 years ago +1

    Your videos are awesome and they help to get a deeper understanding of Mule. Appreciate your efforts. Thanks a lot.

  • @RamKrishna-vm3mp 3 years ago

    The best explanation I have come across. Thanks Vishwas.

  • @debottamguha1344 3 years ago +1

    Things are explained quite nicely. Thanks.

  • @rohitKumar-hu2ez 6 months ago

    Thanks Vishwas, crystal-clear concept explanation.

  • @letsshopeasy 3 years ago

    Clear explanation, thanks!

  • @SimpletravelGirl 3 years ago

    Thanks, that's very beautifully explained.

  • @rithulkumar1387 3 years ago

    Excellent video, explained well.

  • @jacekbiaecki8076 2 years ago +1

    Awesome video, as always! To be honest, when you were talking about repeatable in-memory streams I thought you would add 600 more rows to the SQL database to demonstrate the exception ;-) (max in-memory instances is set to 500 in your example). It would be fun to see the exception :) But anyway, a very valuable video! Thank you!

  • @jerrytom4499 3 years ago

    Thanks for the explanation.

  • @harshtamishra5473 3 years ago +2

    Hi Vishwas, thanks for explaining so well. I have one question: after the transformation, when we have to append the data to a file, it will again reach its original size of 1 GB and consume heap memory, right?

  • @sudheerraja3059 3 years ago

    Great explanation. If you could show this practically with a DB or a file, it would help a lot.

  • @kotteramanareddy4331 1 year ago

    Good explanation.

  • @mohanbapuji 2 years ago

    Hi Vishwas, thanks for the valuable sessions.
    I have a doubt: why did the flow error out when an Iterable streaming object was returned? Please clarify why you've used a Transform at the end of the flow.
    Thank you

  • @michaelj1743 3 years ago

    Wonderful video!! Can you do a video on one-way SSL and two-way SSL? Appreciated!!

    • @Vishwasp13 3 years ago

      Thanks, I'll try to make one.

  • @manishjoshi8529 3 years ago

    Thanks for explaining! A question: how do we process a file holding 1 GB of CSV records using a For Each loop without parsing the whole content?

    • @bibekbazaz 3 years ago

      From the video, what I gather is: read the file with the File connector using a non-repeatable stream, use a Choice to check that isEmpty(payload) is false, then inside a For Each loop with a suitable batch size use a Transform Message to perform the transformations. That way, as the stream is consumed, the data becomes available to For Each and processing continues. Since we never use an operation that requires the entire payload at once, we are spared from loading the entire file into memory (see the sketch below).
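
      A minimal sketch of that approach. The flow name, file name, batch size, and DataWeave body are illustrative assumptions, not taken from the video:
        <flow name="process-large-csv">
            <file:read path="large.csv" outputMimeType="application/csv">
                <!-- read-once stream: records are consumed as For Each pulls them -->
                <non-repeatable-stream />
            </file:read>
            <choice>
                <when expression="#[not isEmpty(payload)]">
                    <!-- pull 200 records at a time from the stream -->
                    <foreach batchSize="200">
                        <ee:transform>
                            <ee:message>
                                <ee:set-payload><![CDATA[%dw 2.0
output application/csv
---
// placeholder per-batch transformation over the current batch of rows
payload map (row) -> row]]></ee:set-payload>
                            </ee:message>
                        </ee:transform>
                    </foreach>
                </when>
            </choice>
        </flow>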

  • @nagachittoory6632 3 years ago

    Vishwas, you said a maximum of 500 objects can be stored in-memory as per the config, and we have 6 records in the DB. Is each record considered one object, or is the set of 6 records considered one object? Please clarify. Great job!!

    • @Vishwasp13 3 years ago

      Each record is one single object.
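
      For reference, the limit being discussed is maxBufferSize on the in-memory iterable; a sketch using the thread's numbers (the table name is a placeholder):
        <db:select config-ref="Database_Config">
            <!-- each row is one buffered instance; buffering more than
                 maxBufferSize rows fails the stream -->
            <repeatable-in-memory-iterable initialBufferSize="100"
                                           bufferSizeIncrement="100"
                                           maxBufferSize="500" />
            <db:sql>SELECT * FROM records</db:sql>
        </db:select>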

  • @sreenivasulu3623 1 year ago

    Hi, this is more conceptual. Can you explain the concept with an example, one real-time scenario? That would help more.

  • @manikondapraveen2213 3 years ago

    I don't understand the use of these repeatable streams. Why are we writing into a file again with file store streams? Isn't that duplicating the data and ending with reading the same data again? I worked with streams earlier in Java, where we read a chunk of data from the file and process it before reading the next chunk. That way I use nothing but memory to process the entire file, and it works with unlimited file sizes. I don't understand how the same can be achieved using streams in Mule 4. Is this achievable?

  • @mohan1vamsi 3 years ago

    At 4:00 you said requests will be processed one after the other in streams. If I get 100 requests that all demand the first row in parallel, are the requests processed one after the other? If yes, performance gets impacted, right? Only the first request would execute faster. Please correct my understanding.

    • @Vishwasp13 3 years ago +1

      Every request gets picked up by a separate thread, depending on the Max Concurrency of the flow, so each request gets its own stream instance. If 2 requests are being processed in parallel, they will have 2 different stream instances running in parallel.

    • @mohan1vamsi 3 years ago

      @Vishwasp13 Thanks.
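
      For context, the per-flow concurrency mentioned above is set with the maxConcurrency attribute; a sketch with hypothetical names and values:
        <flow name="rows-api" maxConcurrency="4">
            <http:listener config-ref="HTTP_Listener_config" path="/rows" />
            <!-- each of the up-to-4 concurrent requests gets its own stream -->
            <db:select config-ref="Database_Config">
                <db:sql>SELECT * FROM records</db:sql>
            </db:select>
        </flow>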

  • @ashokvarma2203 3 years ago +1

    In a repeatable file store stream, where exactly will it store the file? Is it in vCore memory or outside the app?

    • @Vishwasp13 3 years ago

      Persistent storage of the CloudHub worker.

    • @ashokvarma2203 3 years ago

      @Vishwasp13 So it means it will use vCore memory, right?

    • @Vishwasp13 3 years ago

      Memory usually refers to volatile memory; it will store the file in non-volatile memory, i.e., persistent disk storage.

    • @ashokvarma2203 3 years ago

      @Vishwasp13 Got it. Thank you.
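
      Putting the thread together: with the file-store strategy, only inMemorySize worth of content stays in heap, and the rest is buffered to a temporary file on the worker's disk. A sketch with an illustrative size (in an EE app this element may carry the ee: namespace prefix):
        <file:read path="bigFile.csv">
            <!-- first 512 KB stay in heap; the remainder is written to a
                 temporary buffer file in persistent disk storage -->
            <repeatable-file-store-stream inMemorySize="512" bufferUnit="KB" />
        </file:read>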