Qtometa Live Session - Learning How to Produce and Consume Messages to a Kafka Topic in Pega

  • Published 9 Feb 2025
  • For complete Pega training, contact Qtometa@gmail.com
    Follow us on:
    LinkedIn: / qtometa
    Instagram: / qtometa
    Twitter: / qtometa
    Facebook: / qtometa
    Support us with a small contribution: buymeacoffee.c...

COMMENTS • 9

  • @rajatarora5404 • 1 year ago +1

    It is a very useful session. Hope to see more in the future.

  • @ReyMH123 • 1 year ago +1

    Very nice video!

  • @madhubolla6250 • 1 year ago +1

    Very informative.

  • @everything1039 • 1 year ago

    Hi sir, the Kafka messaging system is used only in Queue Processors, right, not for Job Schedulers? Is that correct?

    • @Qtometa • 1 year ago

      Job Schedulers were introduced in Pega 8 to replace advanced agents. They leverage Kafka, which increases throughput and improves performance at the database level.
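
      For illustration, here is a minimal plain-Java sketch of producing a message to a Kafka topic, which is roughly the operation Pega performs under the hood when it queues work to its stream service. The broker address, topic name, key, and payload below are assumptions for a local test setup, not Pega's actual internals; in Pega you configure this through rules rather than writing client code.

      ```java
      import java.util.Properties;

      import org.apache.kafka.clients.producer.KafkaProducer;
      import org.apache.kafka.clients.producer.ProducerConfig;
      import org.apache.kafka.clients.producer.ProducerRecord;
      import org.apache.kafka.common.serialization.StringSerializer;

      public class DemoProducer {
          public static void main(String[] args) {
              Properties props = new Properties();
              // Assumed broker address for a local test cluster.
              props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
              props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
              props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

              try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                  // "demo-topic", the key, and the JSON payload are all hypothetical.
                  // Messages with the same key land on the same partition, preserving their order.
                  producer.send(new ProducerRecord<>("demo-topic", "case-123", "{\"status\":\"open\"}"));
                  producer.flush(); // ensure the message is actually sent before exiting
              }
          }
      }
      ```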

  • @Come_with_me9 • 1 year ago

    Hi sir, can I learn Pega now? Is there scope? Please guide.

    • @Qtometa • 1 year ago +1

      Yes, you can learn Pega. The market is a little slow right now, but it should correct in some time.

  • @javvajisatish • 1 year ago

    What happens once the message read is completed? In the OOTB scenario, if we restart the server, the data flows will start and try to read all the messages. If we don't delete the messages once they are read, they may get processed again, which will cause issues. So how can the messages be deleted once reading is completed?

    • @Qtometa • 1 year ago

      Kafka deletes messages only once the retention time expires; we would need to find another way to delete them as soon as they are read, but that can be problematic and create issues in exception handling. Instead, we should set the data flow to read only new messages; otherwise, on restart it may re-read messages that have not yet expired. It also depends on the use case: if we re-read but have logic to ignore duplicates during processing, there is no issue; otherwise, just read the new ones. There is an option in the data flow to read only new messages, as sketched below.
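
      A minimal plain-Java sketch of this "new messages only" behavior combined with a duplicate filter. The broker address, consumer group id, and topic name are assumptions; Pega's Data Flow and real-time data set configure the equivalent internally rather than exposing this code.

      ```java
      import java.time.Duration;
      import java.util.HashSet;
      import java.util.List;
      import java.util.Properties;
      import java.util.Set;

      import org.apache.kafka.clients.consumer.ConsumerConfig;
      import org.apache.kafka.clients.consumer.ConsumerRecord;
      import org.apache.kafka.clients.consumer.ConsumerRecords;
      import org.apache.kafka.clients.consumer.KafkaConsumer;
      import org.apache.kafka.common.serialization.StringDeserializer;

      public class NewMessagesOnlyConsumer {
          public static void main(String[] args) {
              Properties props = new Properties();
              props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker
              props.put(ConsumerConfig.GROUP_ID_CONFIG, "pega-demo-group");         // hypothetical group id
              props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
              props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
              // With no committed offset, start from new messages only instead of
              // re-reading everything still inside the retention window.
              props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
              // Commit offsets manually, only after processing succeeds, so a crash
              // re-delivers messages (at-least-once) instead of losing them.
              props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");

              // Toy in-memory duplicate filter; a real system would use a durable store.
              Set<String> processedKeys = new HashSet<>();

              try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                  consumer.subscribe(List.of("demo-topic")); // hypothetical topic name
                  while (true) {
                      ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                      for (ConsumerRecord<String, String> record : records) {
                          // Idempotent processing: skip anything already seen, so an
                          // occasional re-read after a restart does no harm.
                          if (processedKeys.add(record.key())) {
                              System.out.printf("Processing %s = %s%n", record.key(), record.value());
                          }
                      }
                      consumer.commitSync(); // mark this batch as read for the group
                  }
              }
          }
      }
      ```

      Note that auto.offset.reset only applies when the consumer group has no committed offset; once offsets are committed, the group resumes from where it left off. Combining that with manual commits and a duplicate check gives at-least-once processing without re-handling old messages on every restart.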