I am new to the Mule arena, and the streaming concept that you have presented is impeccable. It cannot be better than this. Keep it up.
Thanks Vishwas, crystal clear concept explanation.
The best explanation I have come across. Thanks Viswas.
Your videos are awesome and they help to get a deeper understanding of Mule. Appreciate your efforts. Thanks a lot.
Great video. 👍
Very helpful video. It sheds light on many questions, like why the entire data gets loaded into memory even when streaming is enabled, and it also gives useful tips on how to avoid that. Appreciate the effort.
Things are explained quite nicely. Thanks.
Awesome video - as always! To be honest... When you were talking about Repeatable In-Memory streams I thought you would add 600 additional rows to the SQL database to demonstrate the exception ;-) (max in-memory instances is set to 500 in your example). It would be fun to see the exception :) But anyway - very valuable video! Thank you!
Excellent video. Explained it well.
Clear explanation.. thanks!
Thanks, that's very beautifully explained.
Hi Vishwas, thanks for explaining so well. I have one question: when we have to append the data to a file after the transformation, it will again reach its original size of 1 GB and consume heap memory, right?
Thanks for the explanation
Hi Vishwas, thanks for the valuable sessions.
I have a doubt: why did the flow error out when an Iterable streaming object was returned? Please also clarify why you've used a Transform at the end of the flow.
Thank you
do you have a github repo?
good explanation
I don't understand the use of these repeatable streams. Why are we writing into a file again in file-store streams? Isn't that duplicating the data, only to end up reading the same data again? I worked with streams earlier in Java, where we read a chunk of data from the file and process it before reading the next chunk. That way I don't need anything other than memory to process the entire file, and it works with files of unlimited size. I don't understand how the same can be achieved using streams in Mule 4. Is this achievable?
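If it helps, here is a minimal sketch of that chunk-by-chunk style in Mule 4 XML (element names follow the Mule 4 streaming docs; the file path, batch size, and DataWeave body are made-up example values, and namespaces are omitted). With a non-repeatable stream the payload is read exactly once, piece by piece, much like the Java approach you describe; the repeatable file-store strategy only buffers to disk so the same payload can be read more than once later in the flow.

```xml
<!-- Sketch only: consume a large CSV once, chunk by chunk, without buffering it all. -->
<flow name="read-once-flow">
  <file:read path="/data/big-input.csv" outputMimeType="application/csv">
    <!-- Non-repeatable: the stream can be consumed only once, nothing is
         duplicated to disk, and only the chunk being processed sits in memory. -->
    <non-repeatable-stream />
  </file:read>

  <!-- For Each pulls records from the stream in batches as it goes. -->
  <foreach batchSize="200">
    <ee:transform>
      <ee:message>
        <ee:set-payload><![CDATA[%dw 2.0
output application/json
---
payload]]></ee:set-payload>
      </ee:message>
    </ee:transform>
  </foreach>
</flow>
```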
For a repeatable file-store stream, where exactly will it store the file? Is it in vCore memory or outside of the app?
Persistent storage of the CloudHub worker.
@@Vishwasp13 It means it will use vCore memory, right?
Memory usually refers to volatile memory; it will store the file in non-volatile memory, i.e. persistent disk storage.
@@Vishwasp13 Got it. Thank you.
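For reference, a minimal sketch of where that setting lives (attribute names per the Mule 4 docs; the path and buffer size are arbitrary example values). Only the configured in-memory buffer sits in heap; anything beyond it spills to the worker's persistent disk rather than consuming more memory.

```xml
<!-- Sketch: repeatable file-store stream on a File read operation.
     Up to 512 KB is kept in heap; the rest of the content is buffered
     on the worker's persistent disk so the stream can be re-read. -->
<file:read path="/data/big-input.csv">
  <repeatable-file-store-stream inMemorySize="512" bufferUnit="KB" />
</file:read>
```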
Great explanation. If you could show this practically with a DB or a file, that would help a lot.
Hi power, it is more conceptual. Can you explain the concept with an example, one real-time scenario? That would help more.
At 4:00 minutes you said requests will be processed one after the other in streams. If I get 100 requests in parallel that all demand the 1st row, do the requests process one after another? If yes, then performance gets impacted, right? Only the 1st request will execute faster. Please correct my understanding.
Every request gets picked up by a separate thread, depending upon the max concurrency of the flow, and each request gets its own stream instance. So if 2 requests are being processed in parallel, they will have 2 different stream instances running in parallel.
@@Vishwasp13 thanks
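For anyone wondering where that knob sits, a small sketch (the flow name, path, and value are just examples): maxConcurrency is an attribute on the flow, and every request the flow accepts becomes its own event with its own streaming payload.

```xml
<!-- Sketch: up to 4 requests are processed in parallel; each one gets its own
     event and therefore its own stream instance. -->
<flow name="orders-api-flow" maxConcurrency="4">
  <http:listener config-ref="HTTP_Listener_config" path="/orders" />
  <db:select config-ref="Database_Config">
    <db:sql>SELECT * FROM orders</db:sql>
  </db:select>
</flow>
```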
Thanks for explaining! A question: how do we process files with 1 GB of CSV records using a For-Each loop without parsing the whole content?
From the video, what I gather is that we will use a File connector to read the file with a Non-Repeatable In-Memory Stream, then use a Choice to check that isEmpty(payload) is false, and then, in a For Each loop with a suitable batch size, use a Transform Message to perform the transformations inside the loop. That way, as the stream is consumed and the data becomes available to For Each, the operations keep going. And since we never used an operation that requires the entire payload at once, we are spared from landing the entire file in memory.
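A rough sketch of that design in Mule 4 XML, if I understood it correctly (element names per the Mule 4 docs; the path, batch size, and DataWeave body are placeholder examples, and namespaces are omitted):

```xml
<flow name="large-csv-flow">
  <file:read path="/data/big-input.csv" outputMimeType="application/csv">
    <non-repeatable-stream />
  </file:read>

  <choice>
    <when expression="#[not isEmpty(payload)]">
      <!-- Records are pulled from the stream batch by batch; the whole file
           never has to sit in memory at once. -->
      <foreach batchSize="100">
        <ee:transform>
          <ee:message>
            <ee:set-payload><![CDATA[%dw 2.0
output application/json
---
payload]]></ee:set-payload>
          </ee:message>
        </ee:transform>
      </foreach>
    </when>
    <otherwise>
      <logger level="INFO" message="Empty file, nothing to process" />
    </otherwise>
  </choice>
</flow>
```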
Wonderful video!! Can you do a video on one-way SSL and two-way SSL? Appreciated!!
Thanks, I'll try to make one.
Vishwas... you said a maximum of 500 objects can be stored in memory as per the config, and we have 6 records in the DB. So can each record be considered an object, or is this set of 6 records considered one object? Please clarify. Great job!!
Each record is one single object.
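For context, that 500 maps to the object-count limit of the in-memory iterable strategy on the connector. A minimal sketch with example attribute values (per the Mule 4 docs, these sizes count individual objects, i.e. rows; the config name and SQL are made up):

```xml
<!-- Sketch: each row returned by the query counts as one object toward the
     500-object in-memory limit. A 6-row result fits easily; a 600-row result
     would exceed maxBufferSize and raise an error. -->
<db:select config-ref="Database_Config">
  <repeatable-in-memory-iterable initialBufferSize="100"
                                 bufferSizeIncrement="100"
                                 maxBufferSize="500" />
  <db:sql>SELECT * FROM employees</db:sql>
</db:select>
```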