MuleSoft || Mule-4 Batch Processing - How to Process the Failed Records?

  • Published 30 Jun 2024
  • This video demonstrates how to build a Mule 4 Batch Job and introduces a technique to isolate and process the failed records. It also explains a file-naming convention that avoids losing files when records arrive in quick succession. (A hedged sketch of such a flow, reconstructed from this description, follows this list.)
    The entire flow is given here. Please download and practice:
    drive.google.com/file/d/1YFqy...
    The file name format is given below:
    #['File' ++ ( now() as String {format:'yyyy-MM-dd-hh-mm-ss'} ) ++ '-' ++ (random() * 1000) ++ '.txt' ]
  • Science & Technology
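
Below is a minimal, hedged sketch of the kind of flow the description above outlines: a Batch Job step that wraps record processing in a Try scope, parks failed records on a VM queue via On Error Continue, and writes each aggregated group of three records to a timestamped file. The element names are standard Mule 4 XML, but the config refs (File_Config, VM_Config), the queue name, the folder, and the payload-as-Number check are assumptions, not taken from the downloadable project.

    <flow name="batch-failed-records-flow">
      <batch:job jobName="processRecordsBatchJob">
        <batch:process-records>
          <batch:step name="transformAndPublishStep">
            <try>
              <!-- record processing that fails for non-numeric records such as "abc" -->
              <set-payload value="#[payload as Number]"/>
              <error-handler>
                <on-error-continue type="ANY">
                  <!-- park the failed record on a VM queue for separate processing -->
                  <vm:publish config-ref="VM_Config" queueName="failedRecordsQueue"/>
                </on-error-continue>
              </error-handler>
            </try>
            <batch:aggregator size="3">
              <!-- one file per group; the video's naming expression, with an explicit
                   'as String' added to the random suffix -->
              <file:write config-ref="File_Config"
                path="#['success/File' ++ (now() as String {format: 'yyyy-MM-dd-hh-mm-ss'}) ++ '-' ++ ((random() * 1000) as String) ++ '.txt']"/>
            </batch:aggregator>
          </batch:step>
        </batch:process-records>
        <batch:on-complete>
          <logger level="INFO" message="Batch job finished"/>
        </batch:on-complete>
      </batch:job>
    </flow>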

COMMENTS • 42

  • @AZeeee · 3 years ago

    Excellent! Very neat with the try catch.

  • @dineshbabu4919 · 4 years ago

    Great sir 🔥✌️ One of the important topics in Mule 4.

  • @snehak9176 · 4 years ago

    Very Informative Video!

  • @emailyreddy · 4 years ago

    Good one, I do appreciate your contribution

  • @muraligoud2961 · 4 years ago

    Great work sir

  • @tkraju2008 · 4 years ago

    Excellent, and it's very useful.

  • @aravindvenkatesh9961 · 3 years ago

    Nice knowledge sharing Siva

  • @trbryant3 · 4 years ago

    Excellent video!

  • @thedeveloper2513 · 4 years ago

    Another great video. Error handling in a batch job is very important.

  • @pradeepchennam4469 · 4 years ago

    Thanks for the video.
    Can you make a video on Jenkins CI/CD for on-prem Mule standalone servers?
    I saw a video on cloud auto-deployments but not on on-prem.

  • @nagendrakumar1387 · 3 years ago

    For Each also iterates one by one... so what is the difference between For Each and Batch, other than single-threaded vs multi-threaded? @siva thankamanee

  • @jaydixit2482 · 4 years ago

    With the input ["2", "4", "abc", "8", "12", "14"], "abc" is coming in the success file as well as in the error file. I am trying to exclude this invalid record from the success file... any clue?

    • @pragun1993 · 1 year ago

      That's a nice catch. I had the same doubt. You can check my reply on the comment posted by @saitejam1614.

  • @tkraju2008 · 4 years ago

    Hi Siva, I have worked on the example. I am unable to get 2 success files and 1 failure file; I get only 1 success file and 1 failure file. Please tell me where I made the mistake.

    • @sivathankamanee-channel · 4 years ago

      Hi Raju - thanks for your appreciation. Did you check that the batch aggregator size is set to 3?

    • @tkraju2008 · 4 years ago

      yes

    • @tkraju2008 · 4 years ago

      In the success scenario I am able to get 2 success files.

    • @bharathkumarnj · 4 years ago

      Hi Siva - I'm also facing the same issue. Mule version is 4.2.2.

    • @saitejam1614 · 4 years ago

      Same issue for me as well on 4.2.2
      It worked well on 4.2.0

  • @saitejam1614 · 4 years ago

    After the On Error Continue, the failed record is also passed into the batch aggregator.
    So all 6 records are written to the success file and 1 record is written to the failed file.
    Any idea how to fix this?

    • @sivathankamanee-channel · 4 years ago

      Hi Saiteja - it depends on the use case. The error records are critical for the transaction, and the aggregation is for historical purposes. It looks perfectly OK. What use case are you trying to simulate?

    • @saitejam1614 · 4 years ago

      @sivathankamanee-channel
      Just wanted to inform you that error records are being aggregated in the batch aggregator as well.
      (In your aggregator, you see 3 records the first time and 3 the second time, but I see 3 the first and second time too.)
      Don't know why this is happening, though.
      But using the filter function I removed them and continued processing the successful ones.
      Maybe a change in version caused this. Anyway, thanks for the quick reply.

    • @pragun1993 · 1 year ago

      @saitejam1614 Hi, I was about to comment this. Nice catch!
      This is happening because Siva has used On Error Continue. By default (even if you don't handle the error in a Batch Step), if a record fails, it is not considered for aggregation. But when you use On Error Continue, the Try scope passes the record on as a success, without an error, to the next component - the aggregator stage in this case - and hence it gets aggregated. My guess is that whatever payload is returned after the Publish, that returned payload is aggregated with the other records and ends up in one of the success files. Siva showed only one success file in the On Error Continue example, not both; I'm sure that returned payload was lying in the other success file after being aggregated.
      If you truly want to isolate the error records, I find On Error Propagate the best candidate here, because the errors still won't disturb the rest of the batch job. The error records would be published to the VM queue and not aggregated (and hence not written to a success file).
      It's also better to set Max Failed Records to -1 in the Batch Job configuration so that the Batch Job doesn't stop, in case you have more than one batch step.
      @sivathankamanee-channel Please correct me if my understanding is wrong.
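
      Below is a minimal, hedged sketch of the variant described in the comment above: On Error Propagate instead of On Error Continue, plus maxFailedRecords="-1" on the job. The element names are standard Mule 4 XML; the config refs, queue name, step name, and the payload-as-Number check are placeholders, not taken from the video's project.

          <batch:job jobName="processRecordsBatchJob" maxFailedRecords="-1">
            <batch:process-records>
              <batch:step name="transformAndPublishStep">
                <try>
                  <!-- record processing that fails for non-numeric records -->
                  <set-payload value="#[payload as Number]"/>
                  <error-handler>
                    <on-error-propagate type="ANY">
                      <!-- park the failed record, then re-throw: the batch engine marks
                           the record as failed, so it never reaches the aggregator -->
                      <vm:publish config-ref="VM_Config" queueName="failedRecordsQueue"/>
                    </on-error-propagate>
                  </error-handler>
                </try>
                <batch:aggregator size="3">
                  <!-- only successful records are written to the success files -->
                  <file:write config-ref="File_Config" path="#['success-' ++ uuid() ++ '.txt']"/>
                </batch:aggregator>
              </batch:step>
            </batch:process-records>
          </batch:job>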

  • @fresher123Man · 3 years ago

    Hi Sir, for example, if it fails inside the Batch Aggregator, the Try won't handle it, right? And there are options to select on a Batch Step like NO_FAILURES, ONLY_FAILURES... How can we use those in a better way? Please reply, as this is one of my requirements.

    • @shashanksingh4325 · 2 years ago

      Yes, maxFailedRecords as -1, and a new Batch Step that accepts the failed records can be used for error handling.
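
      A minimal, hedged sketch of that accept-policy approach (step names, config refs, and the queue name are placeholders): with no error handler in the first step, a failing record is simply flagged as failed; maxFailedRecords="-1" lets the job keep running, and a second step with acceptPolicy="ONLY_FAILURES" then receives only those failed records.

          <batch:job jobName="processRecordsBatchJob" maxFailedRecords="-1">
            <batch:process-records>
              <batch:step name="processStep">
                <!-- no error handler: a failing record is just flagged by the batch engine -->
                <set-payload value="#[payload as Number]"/>
                <batch:aggregator size="3">
                  <file:write config-ref="File_Config" path="#['success-' ++ uuid() ++ '.txt']"/>
                </batch:aggregator>
              </batch:step>
              <!-- ONLY_FAILURES: this step receives only the records that failed above -->
              <batch:step name="handleFailuresStep" acceptPolicy="ONLY_FAILURES">
                <vm:publish config-ref="VM_Config" queueName="failedRecordsQueue"/>
              </batch:step>
            </batch:process-records>
          </batch:job>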

  • @mayuria2447 · 3 years ago

    Sir, here, instead of a file, if we are writing into a database and the database goes down while writing the records, what happens there and how do we need to handle the data?

    • @shashanksingh4325 · 2 years ago

      Either a retry mechanism, or publish to some queue for later processing. But if the DB is consistently down for all records, then using maxFailedRecords, handling logic can be added accordingly (depending on the requirement).
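
      A minimal, hedged sketch of the retry idea for a database target (the Database and VM config refs, queue name, table, and column are assumed placeholders, not from the video): retry the insert a few times, and if the DB is still down, park the record on a queue for later processing.

          <batch:step name="writeToDbStep">
            <try>
              <until-successful maxRetries="5" millisBetweenRetries="10000">
                <db:insert config-ref="Database_Config">
                  <db:sql><![CDATA[INSERT INTO RECORDS (VALUE) VALUES (:recordValue)]]></db:sql>
                  <db:input-parameters><![CDATA[#[{recordValue: payload}]]]></db:input-parameters>
                </db:insert>
              </until-successful>
              <error-handler>
                <on-error-propagate type="ANY">
                  <!-- DB still unavailable after the retries: keep the record for later -->
                  <vm:publish config-ref="VM_Config" queueName="dbRetryQueue"/>
                </on-error-propagate>
              </error-handler>
            </try>
          </batch:step>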

  • @prangyanjaliparida9035 · 4 years ago

    This is working fine when I pass all integer values as one array, but when I pass a string value in the middle of the array, it stops execution after that string value.

    • @sivakumarmahadevanpillai5188 · 4 years ago

      I faced the same issue. I used another Batch Step to capture only the errors, using the ONLY_FAILURES accept policy, and published the errors to the VM queue. Then it works fine.

    • @shashanksingh4325 · 2 years ago

      Check maxFailedRecords once; the option above is also good.

  • @rakeshmatch4681 · 4 years ago

    I am using For Each and I see that for each error record it is creating an error-record file... what am I doing wrong?

    • @shashanksingh4325 · 2 years ago

      Because For Each doesn't use an aggregator (it handles one record at a time), while the Batch Aggregator aggregates all the records and then creates one file at once (for all of them). The Write is inside the Batch Aggregator in the above use case.
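
      A minimal, hedged sketch of that difference (paths and config refs are placeholders): the same File Write produces one file per record inside For Each, but one file per aggregated group inside a Batch Aggregator.

          <!-- inside For Each the write runs once per record: one error file per record -->
          <foreach>
            <file:write config-ref="File_Config" path="#['error-' ++ uuid() ++ '.txt']"/>
          </foreach>

          <!-- inside a Batch Aggregator the write runs once per group of three records,
               with the whole group as the payload, so they land in a single file -->
          <batch:aggregator size="3">
            <file:write config-ref="File_Config" path="#['success-' ++ uuid() ++ '.txt']"/>
          </batch:aggregator>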

  • @varunsai5750 · 2 years ago

    Hello sir, please upload the sessions in order.

  • @hellothere848 · 4 years ago

    How about processing the failed records in a separate Batch Step that processes only failed records?

    • @sivathankamanee-channel · 4 years ago

      That's the idea. Once you have the specific record in the error/failed folder or in the VM queue, you can do any processing you require. If required, you can introduce an aggregator and process them collectively. In my view it's not a great design when you postpone or process failures collectively.

    • @hellothere848 · 4 years ago

      @sivathankamanee-channel Actually you don't have to provide error handling in Batch Step 1; you set Max Failed Records to -1 for the Batch Job, then introduce Batch Step 2 with the ONLY_FAILURES accept policy to process the failed records.

    • @vp17in · 4 years ago

      @sivathankamanee-channel
      Thank you so much for preparing and presenting such a wide range of topics in Mule. This is no easy task as a working professional. Hats off to you.
      Batch internally flags the failed records and makes them available in the subsequent batch steps. We can access the failed records using the Batch Filter (Accept Policy) attribute. By introducing VM queues we are not only adding more components/complexity, we are also adding performance overhead. Also, we are not postponing the handling of failures: Batch is asynchronous and multi-threaded, and it does not wait for all records to be processed in one step; it keeps moving records to the next steps. The whole point of batch processing is to perform bulk (collective) processing, which includes handling any failures.
      Flows/approach should be simple by design while leveraging all the features that Mule provides us.

    • @shashanksingh4325 · 2 years ago

      @hellothere848 Yes, it can be done this way as well.