Best video out there!! I have been searching about this topic a lot and fortunately found this video; the way you explained it with diagrams made it so easy to grasp.
Glad to hear the video is helpful to you @jeetkp6186! Happy Learning
To be honest, I have been searching for a clear explanation of this Schema Registry topic across many videos, but your explanation was the best of the best!
Glad to know the video is helpful to you Manoj T! You can check these 3 videos for Schema Registry hands-on --
Introduction to Schema Registry in Kafka | Part 2
ua-cam.com/video/46aZ6dqvswk/v-deo.html
Using Glue schema registry for Apache Kafka Producer with Python
ua-cam.com/video/2qKWs5_g8hU/v-deo.html
Integration of AWS Glue Schema Registry & Kafka Consumer using Python
ua-cam.com/video/q7XFcfE_TJ0/v-deo.html
Happy Learning
This is exactly what I have been looking for to understand Avro SR. Excellent, thanks.
Glad to hear this @ADayinMyBrain! Happy Learning
This is really very helpful. You have explained it in such a simplified manner. Thank you! 👏
Man, the way you elucidate the topic is pretty amazing. You cleared all my concerns related to this topic. Thanks
Glad to hear that Naman Jain! Happy Learning
@@KnowledgeAmplifier1 I have one doubt: in JSONSerializer, what is the purpose of the SerializationContext we pass in?
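For anyone with the same doubt, here is a minimal sketch, assuming confluent-kafka-python with a placeholder topic, registry URL and JSON Schema: the SerializationContext simply tells the serializer which topic the bytes belong to and whether they are the message key or value, which the subject-name strategy uses to build the subject name.
```python
# Minimal sketch (assumptions: confluent-kafka-python; topic name, registry URL
# and schema below are placeholders, not taken from the video).
from confluent_kafka.schema_registry import SchemaRegistryClient
from confluent_kafka.schema_registry.json_schema import JSONSerializer
from confluent_kafka.serialization import SerializationContext, MessageField

schema_str = """
{
  "type": "object",
  "properties": {"id": {"type": "integer"}, "name": {"type": "string"}},
  "required": ["id", "name"]
}
"""

registry = SchemaRegistryClient({"url": "http://localhost:8081"})
json_serializer = JSONSerializer(schema_str, registry)

record = {"id": 1, "name": "demo"}

# SerializationContext carries the topic name and whether these bytes are the
# key or the value; the default subject-name strategy uses it to derive the
# subject, e.g. "my-topic-value".
ctx = SerializationContext("my-topic", MessageField.VALUE)
value_bytes = json_serializer(record, ctx)
```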
Really great explanation and keep up the good work. This really helps.
Thank you so much for your kind words! I'm glad that my explanation was helpful for you, Dinesh Karunanithi. Stay Tuned
Thank you for this video.
You're welcome praveen sb, Happy Learning !
Thank you. You teach well. Best of luck :)
You are welcome Sagar Goswami! Happy Learning
Great, finally understood the schema registry in a very simple way
Thank you so much ❤
Glad to hear that
Hi, may I know whether the Schema Registry can be implemented on Windows as well? Where can we get the schema-registry-run-class and schema-registry-start files for Windows?
Thanks a lot for the video
You are welcome
Thanks for this video
You are welcome babak dorani! Happy Learning
Sir, I'm going to create a POC using Lambda, Redshift, Glue, and S3, so I want to generate a huge amount of data for both batch and streaming. Can you please make a video on how we can generate a huge amount of data?
Hello Rahul kakade15, noted in the backlog. In the meantime, you can try the Amazon Customer Reviews Dataset (s3.amazonaws.com/amazon-reviews-pds/readme.html) or you can generate fake data from this website -- www.mockaroo.com/
@@KnowledgeAmplifier1 thank you so much
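In the meantime, a minimal sketch for generating synthetic records with just the Python standard library (the field names and volumes below are only illustrative assumptions); the same generator can be written to a file for batch loads or looped inside a producer for streaming:
```python
# Minimal sketch (assumption: plain Python standard library, no external services).
# Writes synthetic order records as JSON lines; reuse fake_order() in a loop
# inside a Kafka producer for the streaming case.
import json
import random
import time
import uuid

def fake_order():
    """Build one synthetic record; the fields are illustrative only."""
    return {
        "order_id": str(uuid.uuid4()),
        "customer_id": random.randint(1, 100_000),
        "amount": round(random.uniform(5.0, 500.0), 2),
        "event_time": int(time.time() * 1000),
    }

# Batch: dump N records to a JSON-lines file (e.g. to upload to S3 afterwards).
with open("orders.jsonl", "w") as f:
    for _ in range(1_000_000):
        f.write(json.dumps(fake_order()) + "\n")
```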
Really great explanation. Could you please make a video on subject name strategies when we have to support multiple schemas under one topic, with Python code examples?
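Until such a video exists, here is a rough sketch of the idea, assuming confluent-kafka-python with a placeholder registry URL and schema: the Avro serializer's 'subject.name.strategy' setting accepts a callable, so a record-name style strategy can register several record types under one topic (please verify the exact config key against your client version).
```python
# Rough sketch (assumptions: confluent-kafka-python Avro serializer; registry URL
# and schema are placeholders). The strategy below mimics a record-name strategy,
# so several record types can share one topic.
from confluent_kafka.schema_registry import SchemaRegistryClient
from confluent_kafka.schema_registry.avro import AvroSerializer

order_schema = """
{"type": "record", "name": "OrderCreated", "namespace": "demo",
 "fields": [{"name": "order_id", "type": "string"}]}
"""

def record_name_strategy(ctx, record_name):
    # Derive the subject from the record name instead of "<topic>-value",
    # so OrderCreated and OrderCancelled can both live under one topic.
    return record_name

registry = SchemaRegistryClient({"url": "http://localhost:8081"})
order_serializer = AvroSerializer(
    registry,
    order_schema,
    conf={"subject.name.strategy": record_name_strategy},
)
```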
Where will that topic be saved on the computer?
#KnowledgeAmplifier thanks a lot bro. Any video on Conduktor?
Hello SpiritOfIndia, I don't have any dedicated video on Conduktor as of now; you can refer to this channel -- www.youtube.com/@getconduktor/videos
Nice video. One question though:
If schema validation is already happening at the producer level, what are the chances, or in which scenario, can schema validation fail on the consumer side?
Ideally it's not possible, unless someone manually changes the schema and the compatibility type in the Schema Registry after the message has been published to the Kafka topic 🙂 On the consumer side, the Schema Registry is used for deserialization ...
Sure thanks 🙏
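To make the reply above concrete, here is a minimal consumer-side sketch, assuming confluent-kafka-python with placeholder broker, topic, group and registry URL: the deserializer reads the schema ID embedded in each message and fetches exactly that schema from the registry, so a failure here normally means the registry is unreachable or the schema/compatibility was changed after the message was published.
```python
# Minimal sketch (assumptions: confluent-kafka-python, local broker and registry,
# placeholder topic/group names). The deserializer looks up the writer schema by
# the ID embedded in each message, so it normally cannot "mismatch" the payload.
from confluent_kafka import Consumer
from confluent_kafka.schema_registry import SchemaRegistryClient
from confluent_kafka.schema_registry.avro import AvroDeserializer
from confluent_kafka.serialization import SerializationContext, MessageField

registry = SchemaRegistryClient({"url": "http://localhost:8081"})
avro_deserializer = AvroDeserializer(registry)

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "demo-consumer",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["my-topic"])

while True:
    msg = consumer.poll(1.0)
    if msg is None or msg.error():
        continue
    # Deserialization fetches the writer schema from the registry by the ID
    # stored in the message bytes, then decodes the Avro payload.
    ctx = SerializationContext(msg.topic(), MessageField.VALUE)
    record = avro_deserializer(msg.value(), ctx)
    print(record)
```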
It is not well explained; there should always be an example of failure. The video makes it hard to imagine when a consumer can fail to parse a message. Following the video, the consumer gets the schema ID from the message itself, so how is it possible that the message will not match the schema, if a mismatched message would not have been published with that ID in the first place?
Yes, I have the same question.
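One way to see why a read-time mismatch is unlikely: with the Confluent serializers, every message carries a small header containing the ID of the schema it was written with, so the consumer always resolves exactly that writer schema. A small sketch of reading that header (the function name is illustrative):
```python
# Sketch of the Confluent wire format header on each serialized message:
# byte 0      -> magic byte (0)
# bytes 1..4  -> schema ID, big-endian int, as registered in Schema Registry
# bytes 5..   -> the encoded payload itself
import struct

def read_schema_id(message_value: bytes) -> int:
    magic, schema_id = struct.unpack(">bI", message_value[:5])
    if magic != 0:
        raise ValueError("Not Confluent wire format")
    return schema_id
```
So a plain read tends to fail only when the registry is unreachable or the referenced schema has been removed; compatibility problems surface when a consumer deserializes with its own, newer reader schema.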