Thanks for the nice video. I'm new to RabbitMQ; a couple of questions to understand it better:
1. What happens in RabbitMQ if all the consumer apps are down? Will the queue persist the messages until one of the consumers comes online?
2. In RabbitMQ, how does the broker know about the event type? Does it read the payload, or are there headers to define the event type?
3. If I were to have a blue-green deployment model for my consumer apps, can we define smart rules in the RabbitMQ broker to send events to the blue or green stack dynamically?
4. If my RabbitMQ queue consumer died during processing, can another consumer re-process the same message?
Hi Manoj,
Please find my response:
1. Yes. The queue does persist, and messages will keep piling up until a consumer drains them.
2. Usually this is done using the routing key. RabbitMQ uses this key to route messages to specific consumer queues. Using wildcards, we can easily configure one or more event types to be consumed from a specific consumer-side queue.
3. Unfortunately, once a message is consumed it is drained from the queue. You can have multiple instances of the app consuming from the same queue; that works. But you won't be able to replay the message if something goes wrong in your green instance and you want to roll back. You will have to take care of those messages on your app's end.
4. There is an acknowledgment (ack) which the consumer has to send to remove that particular message from the queue. If the consumer died and did not ack a particular message, then yes, another consumer can re-process the same message.
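To illustrate the routing-key point: RabbitMQ topic exchanges match a message's routing key against binding patterns where `*` matches exactly one dot-separated word and `#` matches zero or more words. Here's a rough pure-Python sketch of that matching rule (an illustration of the semantics, not RabbitMQ's actual implementation):

```python
def topic_matches(pattern: str, key: str) -> bool:
    """True if routing key `key` matches topic-exchange binding `pattern`.

    '*' matches exactly one dot-separated word; '#' matches zero or more words.
    """
    def match(p, k):
        if not p:
            return not k  # pattern exhausted: match only if key is too
        if p[0] == "#":
            # '#' can absorb zero or more of the remaining words
            return any(match(p[1:], k[i:]) for i in range(len(k) + 1))
        if not k:
            return False
        if p[0] == "*" or p[0] == k[0]:
            return match(p[1:], k[1:])
        return False
    return match(pattern.split("."), key.split("."))
```

So a queue bound with `order.*` would receive `order.cancelled` but not `order.eu.cancelled`, while a binding of `order.#` receives both.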
@TechPrimers Thank you so much, really appreciate your time. I see some similarities between RabbitMQ and AWS SQS.
Not exactly but sort of. If you merge SNS with SQS you can get routing features. :)
If you use Kafka in a small org, your finances will end up in the ICU.
We see a lot of comparison videos but most are very shallow and focused on jargons.
This is one of the videos focused on practical use cases. Nicely done!
I was searching for the difference in so many places, but you put a full stop to the search. This is a concrete and to-the-point explanation. Thanks a lot mate. Subscribed for more. Thanks again; it takes a lot of effort to make such content, much appreciated. :)
Appreciate the video :D
I think for event distribution, in case the events are decoupled, you can create two separate topics in Kafka:
1. TOPIC_EVENT_1
2. TOPIC_EVENT_2
Once created, you can create two consumer groups:
1. CONSUMER_TOPIC_EVENT_1_AND_2
2. CONSUMER_TOPIC_EVENT_2
Then you can subscribe the first consumer group to both the 1st and 2nd topics, and the second to only the 2nd topic.
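To make that fan-out concrete, here's a toy sketch (plain Python, using the hypothetical topic and group names from the comment above, not real Kafka client code) of which groups would receive an event published to each topic:

```python
# Hypothetical subscriptions: group name -> set of topics it reads
subscriptions = {
    "CONSUMER_TOPIC_EVENT_1_AND_2": {"TOPIC_EVENT_1", "TOPIC_EVENT_2"},
    "CONSUMER_TOPIC_EVENT_2": {"TOPIC_EVENT_2"},
}

def delivered_to(topic: str) -> list[str]:
    """Consumer groups that would receive an event published to `topic`."""
    return sorted(g for g, topics in subscriptions.items() if topic in topics)
```

Events on TOPIC_EVENT_1 reach only the first group, while events on TOPIC_EVENT_2 reach both, which is exactly the decoupled distribution the comment describes.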
In the example at 6:05, you mentioned Kafka would store different events in different partitions. I guess you meant different topics?
Yeah, I guess he meant topics as well. The partition for a message is decided on the basis of the key provided by the producer, and partitions are assigned to consumers inside a consumer group.
@MrXperx Partitions are part of a topic; one topic can have many partitions. If we want to separate events into different partitions of a single topic, we can use the event type as the hash key to ensure different event types go into different partitions. That way, different consumers of a consumer group can consume different event types from different partitions.
Thanks so much for putting in the effort to explain things in a simpler way, as always. I wish you get more viewers/subscribers than what you have right now. Only one suggestion from my end: please try to post more videos. I know it's not easy, as it takes a lot of your time. Thanks again!
Glad you found it useful Nishanth.
Viral content is not why I'm here on youtube. :)
Even if 5 people watch and find it useful and "relevant", that's my success. Thanks for staying connected and providing feedback.
You should see improvements in frequency and quality soon. ;)
Thanks for the detailed video. At 07:57 you say Kafka has high throughput and is highly scalable. What is the comparison in MB/sec and req/sec? And what makes RabbitMQ's throughput lower: is it the handling of new messages, or the storage layer?
Liked. Well done, and well spoken. Question about your last point on the Kafka limitations about producer coordination regarding partitions. In my experience, producers produce to a topic, not to a partition. And, Kafka evenly distributes the messages across the partitions within the topic. In that scenario, producers don't know or care about how many partitions there are. Are you referring to a mode where producers specify or know about the partitions?
You've an excellent understanding Sir! Thank you for simplifying this
I have not worked on RabbitMQ, just done a few POCs, but Kafka is hands down the most powerful tool I have seen in my career. Spring gives out-of-the-box integration with Kafka, and you can plan your full application on these two monsters. Kudos to you dude, as always :-)
Yes, can't agree more. Little-known fact: RabbitMQ is also owned by Pivotal, which owns Spring 😁
Definitely a good comparison for a beginner to understand. Thank you very much for your efforts.
Glad it was helpful!
About the coupling and decoupling part... how about using consumer groups?
very well explained. You are really doing a wonderful job. These videos help a lot in understanding the concepts. I wish you good health and lots of success.
Thank you Abhilash. Glad it's helpful.
Thanks man! concisely put.
And what is better with Kafka, the 1st or the 2nd option?
I think the RabbitMQ example at the 10:40 mark is not right. You mentioned that "Sales & Inventory" sends "order.cancelled", which can be consumed by "billing" and "fulfillment". But isn't it the case that RabbitMQ cannot replay messages? My understanding is that if, for example, "billing" reads the order.cancelled event, then "fulfillment" will miss the message, since it is not going to be there anymore.
I also have the same doubt, unless we use a dead-letter queue, from where replay is possible.
Thanks for the video, it's exactly what I was looking for.
Isn't RabbitMQ more suitable for high-throughput stuff, since it can prioritize messages?
Thank you for your continued efforts to educate others.
My pleasure Hari
Good example ! 👍🏼
Hello! Which one would you use in a case where changes to a specific field in an Oracle database table need to be captured (Change Data Capture) and forwarded to a push notification system like Firebase? Any advice will be appreciated. Thank you.
The point I was searching for: 3:22 to 3:44. Thank you!
Very nice explanation
Thanks, nice video. Easy to understand.
Kafka has topics, and topics have a configurable number of partitions; not sure what the video said regarding partitions wrapping events, but it sounds technically incorrect. Also, RabbitMQ can be durable.
Thanks for the nice video! One thing is not clear: what do you mean by "streaming" in Kafka? Can you show some messages in action with and without streaming? TCP/IP always streams data, so what do you mean by streaming in Kafka? Also, RabbitMQ supports pub/sub too, so the pub/sub example you showed fits both tools.
Nice video sir. With Spring Cloud Stream we can configure things in a generic way, so that we can switch from RabbitMQ to Kafka and vice versa.
#Mediator 😁
What about Spring Cloud Stream?
@blasttrash Oops, it's Cloud Stream only...
Excellent presentation
Very good background. However, when it comes to the scenario you presented (good choice of scenario), the options presented are NOT close to the real-world scenarios being implemented. If someone is doing that, they don't understand the cost and technical implications of choosing either Kafka option 1 or option 2. It would be good if you could dig deeper into situations where there is a combination, there are failures, and you are not offloading everything into Kafka.
Regarding the RabbitMQ limitation: RabbitMQ has since introduced Streams for streaming use cases (quorum queues, by contrast, are replicated queues for HA).
Can Kafka be configured to act like RabbitMQ, i.e. consuming an event only once and never re-consuming previous events?
Yes, google "kafka exactly-once consumer"
No it cannot. Apache Pulsar would be a better option for this
At 6:02: you can't guarantee which event goes to which partition in Kafka, so you may also have event 1 and event 2 in partition 2. The default is round-robin when the key is not set; one can use a specific client implementation that sets the partition, so it is not possible to do this via the console producer.
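A rough sketch of that partitioner behavior, for illustration only: Kafka's default partitioner actually hashes keys with murmur2 (crc32 stands in for it here), and newer clients use sticky partitioning rather than strict round-robin for keyless messages.

```python
import itertools
import zlib

_round_robin = itertools.count()

def choose_partition(key, num_partitions):
    """Pick a partition for a message, mimicking the classic behavior."""
    if key is not None:
        # Keyed message: deterministic hash, so the same key always
        # lands on the same partition (Kafka really uses murmur2)
        return zlib.crc32(key) % num_partitions
    # Keyless message: spread across partitions in round-robin fashion
    return next(_round_robin) % num_partitions
```

This is why keying by event type groups events deterministically, while keyless sends give no guarantee about which partition receives which event.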
Thanks for the very well summed-up video. A few queries from me:
1. Since Kafka is a cluster, we have high availability. Is there a way of having high availability for queues?
2. If Kafka is down for a while, even with one broker it can recover all messages from the files saved on the local filesystem. Is it possible to recover messages if a queue is down for a while?
RabbitMQ also supports clusters for HA.
RabbitMQ also has persistence, so if a broker goes down and comes back up, it can pick up the messages from the persistent store and resume processing them.
Need a little more info on where you said RabbitMQ has a smart broker and Kafka does not have any rules at the broker.
If you see the example I explained in between, where app2 can have a separate queue with its own event topics: that's what I mean by a smart broker with routing rules.
Thank you.
Does RabbitMQ have some form of persistence?
Why can't we use RabbitMQ for a streaming use case?
I still can't find any reason to not use Kafka. Not that I'm biased towards it, I'm actually trying to find a use case that Kafka doesn't fit but other message queues do. But the reasons listed here and elsewhere don't satisfy me.
Thank you !!
My instinct tells me that for the sample case study, Kafka option 1 is the best. Agree?
Can you please share the PPT?
In a scenario where we have multiple publishers and a single consumer (2 types of events only), what is your suggestion? Kafka, RabbitMQ, or neither one, just something like gRPC?
I am researching to choose a good technology for a log-gathering system. We have a lot of loggers, and the server should store all logs in the database.
For log gathering, Kafka is the best choice. In fact, Kafka was originally created as a streaming log platform.
How are topics in Kafka different from queues in RabbitMQ?
In theory both are more or less the same. However, once a queue is drained, you cannot get the messages back, whereas in Kafka you can get the messages back if you have them retained. Kafka uses consumer groups to differentiate consumers; RabbitMQ uses queues to segregate consumers. This means each consumer can get its own copy of the messages (on both platforms).
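A tiny toy illustration of that retention difference (plain Python, not client APIs): a Kafka-style log keeps messages and each consumer group only advances its own offset, while a queue hands each message out once and it is gone.

```python
from collections import deque

# Kafka-style: an append-only log plus per-group offsets; reading never removes
log = ["e1", "e2", "e3"]
offsets = {"billing": 0, "audit": 0}  # hypothetical consumer groups

def poll(group):
    """Return the next message for `group`, advancing only that group's offset."""
    i = offsets[group]
    if i >= len(log):
        return None
    offsets[group] = i + 1
    return log[i]

# Queue-style: popping a message removes it for every consumer of that queue
queue = deque(["e1", "e2", "e3"])
```

After "billing" polls twice, "audit" still sees "e1" from the log; but once any consumer pops "e1" off the queue, no one else on that queue gets it.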
Nice video. We have one use case for using both a queue and a message streaming platform. Since Kafka provides a smart consumer and doesn't provide queue capability, we are in the process of deciding on Azure Event Hubs for streaming and Azure Service Bus for queues. We also considered Apache Pulsar as an option, but since it is heavy and not widely used, we are not considering it. What are your suggestions for this use case, please? Highly appreciate your response.
Thanks for sharing Suhas. Can you elaborate more on why you need Streaming/Queue both? Any specific reason/requirement?
@TechPrimers Thanks for the response. We need Kafka for the high-throughput requirements, and we need queues because we need a large number of customer-specific queues.
Can you please make a video on the meaning of streams?
Yes Mani, I did it already. Check it out here - ua-cam.com/video/A3Mvy8WMk04/v-deo.html
Great video, I had a question: how would you decide between RabbitMQ with multiple queues vs RabbitMQ with a direct exchange (where we can specify routing keys)?
Rabbit is mostly used for async services inside a microservice ecosystem (same subnet, or namespace in Kubernetes) with multiple queues. Between ecosystems (different subnets) is where RabbitMQ topics are used. Remember, inside the same ecosystem, using topics increases the amount of compute you need and the risk of a crash.
Thanks bro for your efforts to help us keep up with the latest trends in technology.
Would be great to have some career-based videos in IT.
♥️♥️♥️♥️♥️ Love your work... keep it up bro.
I'm just another developer like you, Ashutosh. :)
If I create career-based videos, it will digress from the channel's core ideology of learning new technologies.
Appreciate you requesting it.
Agree bro !
What's the easiest one? The most human-friendly? That's all I care about.
RabbitMQ can also be configured to be durable... I don't know why you ignored that.
Thank you very much....
Thanks for your effort creating and sharing this video. Why do you say that RabbitMQ only works with a push-based approach? As far as I know, RabbitMQ supports a pull-based approach too.
Great.
RabbitMQ supports both pull and push.
I liked this video not only because it is straight to the point, but also because there were 666 likes and that number needed to be broken...
Sorry, your explanation does not go into the depth of the use cases. For example, what do you mean by RabbitMQ being a pub/sub kind of model at 1:18, when you then say it is used for point-to-point communication and request-response? If it is pub/sub, it is not point-to-point; it is a publisher publishing to a specific endpoint with many subscribers consuming from it. So you need to elaborate on that before moving on.
Similarly, you need to explain how the smart broker routes traffic in RabbitMQ. Kafka has topics and brokers, so what do brokers do in Kafka? They do not route messages/traffic.
Somehow my thinking does not go beyond Kafka.
Opinionated
At the 6-minute mark: I believe a consumer can't tell Kafka to read messages only from a specific partition.
You said the word "leverage" so many times, you almost LEVERAGED everything. Nice video though 👍
👍
RabbitMQ should not be used; it has the concept of back pressure. My organization migrated everything from RMQ to Kafka.
🥰
Is there any "no indian education filter" in UA-cam search?
Not correct
First to like and comment!
Brother, if we just wanted to read, we could have read PDFs. Explain it a bit better.