Want to learn more Big Data technologies? You can get lifetime access to our courses on the Udemy platform. Visit the link below for discounts and coupon codes.
www.learningjournal.guru/courses/
As Albert Einstein once said, "If you cannot explain something in simple terms, you have not understood it." This tutorial explains Kafka concepts in such simple terms that anybody can understand them. Thank you very much for uploading the video! Subscribed and shared with friends!
great
How did the consumer work in this example when it did not have the latest version of the ClickRecord class?
The consumer will not be able to set the newly added properties on the older version of the ClickRecord object, which was generated using the old schema.
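For illustration, here is a minimal consumer sketch of that situation, assuming the tutorial's older generated ClickRecord class and Confluent's Avro deserializer; the topic name, broker, group ID, and registry addresses are placeholders. The deserializer looks up the writer's (new) schema by the ID embedded in the message and resolves it against the reader's (old) schema, so the newly added fields simply never appear on the old ClickRecord object.

```java
// Sketch only: a consumer compiled against the OLD ClickRecord class.
// Broker address, registry URL, group id, and topic name are assumptions.
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class OldSchemaConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "click-consumers");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                "io.confluent.kafka.serializers.KafkaAvroDeserializer");
        props.put("schema.registry.url", "http://localhost:8081");
        props.put("specific.avro.reader", "true"); // deserialize into the generated ClickRecord class

        try (KafkaConsumer<String, ClickRecord> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("click-topic")); // topic name is an assumption
            ConsumerRecords<String, ClickRecord> records = consumer.poll(Duration.ofSeconds(1));
            // Fields added by the new schema are dropped during schema resolution,
            // so only the old fields are visible here.
            records.forEach(r -> System.out.println(r.value()));
        }
    }
}
```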
Thank you! It's very easy to understand what you teach.
Thank you!
I looked at this video and part one of the same topic, but I am stuck since I need to implement a .NET solution (in C#) for Kafka/Avro. It looks like Confluent does not offer Kafka for Windows (only if I use containers or run it in some Linux mode, etc.). It would be great if you had some videos on .NET/C# on the subject (it is really poorly documented).
Since you mention Confluent, it would be nice to have some sessions on what Confluent adds compared to Apache Kafka, and also on the Confluent Control Center and how it can be useful for managing Confluent Kafka services.
In my case, the data to be added to the Kafka topic is a complex object with inheritance. Without versioning, a JSON serializer should be enough. What is the best approach for dealing with existing Java classes with inheritance?
Thanks for asking this question. JSON and Avro are the most commonly used approaches. However, Avro does not support inheritance. I checked with JSON, and it works.
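As one possible illustration of the JSON route, here is a hedged sketch using Jackson's polymorphic type handling to publish a small class hierarchy as JSON strings. The BaseEvent/WebEvent/MobileEvent classes, topic name, and broker address are invented for the example; the @JsonTypeInfo marker is what lets a consumer rebuild the correct subclass, which is the part Avro cannot express.

```java
// Sketch only: publishing a Java class hierarchy as JSON with Jackson.
// The event classes, topic name, and broker address are hypothetical.
import java.util.Properties;
import com.fasterxml.jackson.annotation.JsonSubTypes;
import com.fasterxml.jackson.annotation.JsonTypeInfo;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

@JsonTypeInfo(use = JsonTypeInfo.Id.NAME, property = "type")
@JsonSubTypes({
        @JsonSubTypes.Type(value = WebEvent.class, name = "web"),
        @JsonSubTypes.Type(value = MobileEvent.class, name = "mobile")
})
abstract class BaseEvent { public String sessionId; }
class WebEvent extends BaseEvent { public String browser; }
class MobileEvent extends BaseEvent { public String device; }

public class JsonInheritanceProducerSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        WebEvent event = new WebEvent();
        event.sessionId = "s-001";
        event.browser = "Firefox";

        // Jackson writes a "type":"web" marker so the consumer can
        // deserialize back into the correct subclass of BaseEvent.
        String json = new ObjectMapper().writeValueAsString(event);

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("event-topic", json)); // topic name is an assumption
        }
    }
}
```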
Do we need Confluent Kafka for the Schema Registry? How do we achieve it with Apache Kafka?
Do we need to create a new consumer and producer every time the schema changes?
As an earlier topic mentioned, one partition is handled by just one consumer. In this topic you started two consumers, one for the old schema and one for the new schema, so I think they must be consuming different partitions. Does this mean I need to add new partitions for new schema changes? I can't let new consumers start working if there are no partitions available for them.
I realize now that it is not always true that the number of consumers equals the number of partitions. So if the number of consumers is less than the number of partitions, I think adding new consumers is totally fine.
Hi Sir,
I am able to produce multi-schema messages (e.g., Product and Customer) through the AvroProducer, but I cannot find any API that helps us consume multi-schema (Product, Customer) messages from the same topic.
Hi, you mentioned that the schema ID is embedded in the message and the consumer/deserializer uses it to look up the appropriate schema in the registry.
Q: Can you please tell us how and when the two schemas are registered in the Schema Registry?
In the method explained in this example, we don't have to register the schema manually. It is taken care of by the serializer. The serializer is responsible for registering the new schema and embedding the schema ID in the message.
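To make that concrete, here is a minimal producer sketch under the same assumptions as the tutorial (the generated ClickRecord class and Confluent's KafkaAvroSerializer; addresses and topic name are placeholders). The application only sends the object; the serializer contacts the Schema Registry on the first send of a new schema version (its auto.register.schemas option defaults to true) and embeds the returned schema ID in every message it writes.

```java
// Sketch only: the application never calls the Schema Registry directly.
// Broker address, registry URL, and topic name are assumptions.
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;

public class AvroProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                "io.confluent.kafka.serializers.KafkaAvroSerializer");
        props.put("schema.registry.url", "http://localhost:8081");
        // "auto.register.schemas" defaults to true, so the first send of a new
        // schema version registers it; the serializer then prefixes every
        // message with the schema ID it got back from the registry.

        ClickRecord click = new ClickRecord(); // generated class from the tutorial; populate its fields first
        try (KafkaProducer<String, ClickRecord> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("click-topic", click)); // topic name is an assumption
        }
    }
}
```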
Your explanations are outstanding!!! Thank you so much.
Sir, while answering one of the questions from the subscribers, you mentioned a use case where "Kafka Connect pulls data from an RDBMS (non-Avro) and sinks it into HDFS (Avro files)". Is it possible for you to create a video for this use case, showing the steps from Producer (RDBMS) --> Kafka Broker --> Consumer (Hadoop HDFS)?
I will create a Kafka Connect tutorial shortly.
Thanks!!! May we all know your name? :-)
:-)
@ScholarNest Is there any chance you could do a tutorial on RabbitMQ in the future? Great job here, much appreciated.
I started off with one video, and it was so informative that I ended up going through all the videos (subscribed, of course). Great way to present things. I have a question:
Is it possible for string/JSON data in Kafka to be deserialized to Avro by the consumer?
string or json (non-avro) writing apps -----> serialize ---> bytes ==> KAFKA ==> bytes ---> deserializer --> avro ?
I have a suggestion as well: can you make a video tutorial on the Schema Registry and/or Avro data?
I already have a video on schema registry and Avro data.
Thanks for your response. I will search for it.
By the way, do you think this is possible:
string or json (non-avro) writing apps -----> serialize ---> bytes ==> KAFKA ==> bytes ---> deserializer --> avro ?
Yes, it is possible. However, what are you going to do with the Avro object in the end? I guess you want to store it in Hadoop or some other place. If that's what you are aiming for, Kafka Connect is the most suitable solution. I can see an analogy with a use case where Kafka Connect pulls data from an RDBMS (non-Avro) and sinks it into HDFS (Avro files).
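For the string/JSON-to-Avro question itself, here is a hedged consumer-side sketch: read the message as a plain string, parse the JSON with Jackson, and copy the fields into an Avro GenericRecord. The schema, field names, topic, group ID, and broker address are all invented for illustration; as noted above, Kafka Connect is usually a better home for this conversion in production.

```java
// Sketch only: plain JSON strings in Kafka converted to Avro records by the consumer.
// Schema, field names, topic name, and addresses are assumptions.
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class JsonToAvroConsumerSketch {
    private static final Schema SCHEMA = new Schema.Parser().parse(
            "{\"type\":\"record\",\"name\":\"Click\",\"fields\":["
            + "{\"name\":\"sessionId\",\"type\":\"string\"},"
            + "{\"name\":\"channel\",\"type\":\"string\"}]}");

    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "json-to-avro");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        ObjectMapper mapper = new ObjectMapper();
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("json-clicks"));
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
            for (ConsumerRecord<String, String> r : records) {
                JsonNode node = mapper.readTree(r.value());          // bytes -> String -> JSON tree
                GenericRecord avro = new GenericData.Record(SCHEMA); // JSON tree -> Avro record
                avro.put("sessionId", node.get("sessionId").asText());
                avro.put("channel", node.get("channel").asText());
                System.out.println(avro); // hand off to HDFS or another sink here
            }
        }
    }
}
```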
Sir, can we have a video on Cassandra?
Can someone share the GitHub link that is referred to in this video?
github.com/LearningJournal/ApacheKafkaTutorials