Pega Interview Questions & Answers 5

  • Published Jan 9, 2025

COMMENTS • 19

  • @vivek7187
    @vivek7187 10 months ago +3

    Please implement the question 22 scenario. Thank you.

  • @purushothviswa8843
    @purushothviswa8843 3 months ago +1

    Good video, sir. Keep doing the same; we need an implementation scenario for question 22.

  • @saikrishnakandukuri147
    @saikrishnakandukuri147 10 months ago +2

    Please implement the scenario for updating the old data with the new data.

  • @RameshRam-yh4tk
    @RameshRam-yh4tk 10 months ago +2

    Please implement the remaining questions also.

  • @PraveenKamble-m4j
    @PraveenKamble-m4j 9 months ago +1

    Thanks a lot for your videos. Really helpful. Please make an implementation video.

  • @piyush_mate_videos
    @piyush_mate_videos 9 months ago +1

    Please implement question 22.

  • @rahulsam4564
    @rahulsam4564 9 months ago +1

    Please implement the scenario for question 22.

  • @PraveenKay-wq2ut
    @PraveenKay-wq2ut 9 months ago +1

    Question: We have to fetch millions of records from an external system and create cases. What is the most efficient way to do this? Please suggest.
    As explained in the current video, we would fetch the records using a Job Scheduler and use the OOTB create-new-case activity in the queue processor to create the cases, but is there any other, more efficient way to achieve this?

    • @pegalearnnow3655
      @pegalearnnow3655  9 months ago +2

      Good question. There are many ways to do this; I feel the solution below is better for this situation.
      In this scenario, we can use Data Flows to process the millions of requests for case processing.
      1. Set up Kafka, so that requesters add their records to Kafka topics.
      2. In Pega, set up a Data Flow whose source is a Data Set configured against the Kafka topic. The Data Flow can insert each record into a Data Type table with the request JSON and a status, and then run Queue-For-Processing to queue the item to a queue processor.
      3. That queue processor creates the cases; once each case is created or fails, it updates the Data Type table with a status of success, failure, or retry for further steps.
      Data Flows are the best way to handle a huge volume of records: they queue the work, and the queue processors can then handle that load sequentially.
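
      To make the hand-off in steps 2 and 3 concrete, here is a minimal Java sketch of the ingestion side, written against the plain Apache Kafka consumer client rather than Pega's Data Set and Data Flow rules (which are configured, not coded). The broker address, the topic name, and the stageRequest/enqueueForProcessing helpers are hypothetical stand-ins for the Data Type table insert and the Queue-For-Processing step described above.

      import java.time.Duration;
      import java.util.List;
      import java.util.Properties;
      import org.apache.kafka.clients.consumer.ConsumerRecord;
      import org.apache.kafka.clients.consumer.ConsumerRecords;
      import org.apache.kafka.clients.consumer.KafkaConsumer;

      public class CaseRequestIngester {

          public static void main(String[] args) {
              Properties props = new Properties();
              props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
              props.put("group.id", "case-request-ingesters");
              props.put("key.deserializer",
                        "org.apache.kafka.common.serialization.StringDeserializer");
              props.put("value.deserializer",
                        "org.apache.kafka.common.serialization.StringDeserializer");

              try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                  consumer.subscribe(List.of("case-requests")); // hypothetical topic name
                  while (true) {
                      ConsumerRecords<String, String> batch = consumer.poll(Duration.ofSeconds(1));
                      for (ConsumerRecord<String, String> record : batch) {
                          // Step 2: land the raw request JSON in a staging table with
                          // a status, then queue it for asynchronous case creation
                          // (Queue-For-Processing in Pega terms).
                          stageRequest(record.value(), "QUEUED");
                          enqueueForProcessing(record.key());
                      }
                  }
              }
          }

          // Hypothetical stand-in for the Data Type table insert in step 2.
          static void stageRequest(String requestJson, String status) {
              System.out.printf("staged [%s]: %s%n", status, requestJson);
          }

          // Hypothetical stand-in for Queue-For-Processing; in Pega the queue
          // processor receiving this item would create the case (step 3).
          static void enqueueForProcessing(String key) {
              System.out.printf("queued item %s%n", key);
          }
      }

      The point of the design in the reply is that ingestion (fast, sequential reads from Kafka) is decoupled from case creation (slower, retried per item by the queue processor), so a burst of millions of records never blocks on case processing.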

    • @PraveenKay-wq2ut
      @PraveenKay-wq2ut 9 months ago +1

      Looks fair! Thank you.

  • @badesab1598
    @badesab1598 10 months ago +1

    Thank you

  • @rohithgoud3817
    @rohithgoud3817 9 months ago

    In the OOTB pzBulkProcessItemsInHarness (final rule) activity, step 17 contains Java code that satisfies the jump condition and moves to step 22; however, I will have to move to steps 20 and 21, because there is a Queue-For-Processing in step 21.

  • @seshagirich6624
    @seshagirich6624 6 months ago

    For question 23, we can call the service in the queue processor, but that runs in a different thread, right? So can we use the same data in the main thread? Is that possible?

  • @sportsvideos5034
    @sportsvideos5034 8 months ago +1

    For question 22, why can't we simply fetch all the old records into one page, fetch the new records and insert them into the table, and then delete the old records, instead of setting a flag and looping multiple times?

    • @pegalearnnow3655
      @pegalearnnow3655  8 months ago +1

      Yes, you are right. We can do that if we don't have a key constraint with the global ID as the primary key. In fact, we also need to loop through the backup page list to delete the old records. But I like your idea if we don't care about returning the old or the new record data.
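
      A minimal Java/JDBC sketch of that idea follows, assuming a relational table; the table names (records, records_new) and the connection details are illustrative, not from the video. Deleting the old rows before inserting the new ones inside a single transaction sidesteps the primary-key collision mentioned above, and the rollback keeps the swap all-or-nothing.

      import java.sql.Connection;
      import java.sql.DriverManager;
      import java.sql.PreparedStatement;
      import java.sql.SQLException;

      public class RecordSwap {
          public static void main(String[] args) throws SQLException {
              // Hypothetical connection details; substitute your own data source.
              try (Connection conn = DriverManager.getConnection(
                      "jdbc:postgresql://localhost:5432/appdb", "app", "secret")) {
                  conn.setAutoCommit(false); // one transaction: the swap is all-or-nothing
                  try {
                      // Delete the old records first, so re-used global IDs do not
                      // violate the primary-key constraint.
                      try (PreparedStatement delete = conn.prepareStatement(
                              "DELETE FROM records")) {
                          delete.executeUpdate();
                      }
                      // Then copy the new records in from a staging table.
                      try (PreparedStatement insert = conn.prepareStatement(
                              "INSERT INTO records SELECT * FROM records_new")) {
                          insert.executeUpdate();
                      }
                      conn.commit();
                  } catch (SQLException e) {
                      conn.rollback(); // neither half applies if anything fails
                      throw e;
                  }
              }
          }
      }

      With this shape, the flag-and-loop approach from the video becomes unnecessary, at the cost of needing a staging table and a transactional database.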
