Spring Boot Kafka Tutorial | Mastering Kafka with Spring Boot | Apache Kafka Crash Course
- Published 10 Feb 2025
- Welcome to our comprehensive guide to using Apache Kafka with Spring Boot! In this video, we delve into the fundamental concepts and advanced techniques that will empower you to harness the full potential of Kafka in your applications.
We start with the basics, covering what Kafka is and its key components such as producers, consumers, topics, and partitions. You'll learn about the flow of data within Kafka and explore real-world use cases where Kafka shines.
Next, we dive into practical demonstrations. Discover how to create Kafka producers efficiently and send messages both synchronously and asynchronously. Learn the nuances of sending messages with and without keys, and explore strategies for efficient message routing and consumption.
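For reference, here is a minimal sketch of both send styles (an illustration, not the video's exact code; it assumes spring-kafka 3.x, where KafkaTemplate.send() returns a CompletableFuture, and a hypothetical topic named "orders"):

import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.support.SendResult;
import org.springframework.stereotype.Service;

@Service
public class OrderProducer {

    private final KafkaTemplate<String, String> kafkaTemplate;

    public OrderProducer(KafkaTemplate<String, String> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    // Async send without a key: the partitioner spreads records across
    // partitions, and the callback runs once the broker acknowledges.
    public void sendAsync(String message) {
        kafkaTemplate.send("orders", message).whenComplete((result, ex) -> {
            if (ex != null) {
                System.err.println("Send failed: " + ex.getMessage());
            } else {
                System.out.println("Sent to partition "
                        + result.getRecordMetadata().partition());
            }
        });
    }

    // Sync send with a key: records sharing a key always land on the same
    // partition (preserving their order); get() blocks until acknowledged.
    public SendResult<String, String> sendSync(String key, String message)
            throws Exception {
        return kafkaTemplate.send("orders", key, message).get();
    }
}

Sending with a key is how you get per-key ordering; sending without one trades that for even load across partitions.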
We then explore advanced topics including custom message production and consumption, handling consumer partition rebalancing with multiple consumers, and optimizing offset commitments for reliability.
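As one example of tightening offset commits, a listener can acknowledge manually after processing (a sketch assuming spring.kafka.listener.ack-mode=manual and placeholder topic/group names):

import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.support.Acknowledgment;
import org.springframework.stereotype.Component;

@Component
public class OrderConsumer {

    // With manual ack mode, the offset is committed only after
    // acknowledge() is called, so a crash mid-processing causes
    // redelivery instead of silent message loss.
    @KafkaListener(topics = "orders", groupId = "order-group")
    public void listen(String message, Acknowledgment ack) {
        System.out.println("Received: " + message);
        ack.acknowledge();
    }
}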
Throughout the video, we emphasize best practices for error handling and discuss considerations for deploying Kafka applications outside of cloud environments or Kubernetes.
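One common Spring Kafka error-handling setup (illustrative only; the bean name and retry numbers are placeholders) retries a failed record a couple of times, then routes it to a dead-letter topic:

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.listener.DeadLetterPublishingRecoverer;
import org.springframework.kafka.listener.DefaultErrorHandler;
import org.springframework.util.backoff.FixedBackOff;

@Configuration
public class KafkaErrorConfig {

    @Bean
    public DefaultErrorHandler errorHandler(KafkaTemplate<Object, Object> template) {
        // After two retries spaced 1 second apart, the failed record is
        // published to "<original-topic>.DLT" by default.
        var recoverer = new DeadLetterPublishingRecoverer(template);
        return new DefaultErrorHandler(recoverer, new FixedBackOff(1000L, 2L));
    }
}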
Whether you're new to Kafka or looking to deepen your understanding, this video provides actionable insights and practical examples that will accelerate your journey towards mastering Kafka with Spring Boot.
Don't forget to like, comment, and subscribe for more tutorials on Spring Boot, Kafka, and other cutting-edge technologies!
Github Link: github.com/sha...
docs.spring.io...
START THE KAFKA ENVIRONMENT:
NOTE: Your local environment must have Java 8+ installed.
Apache Kafka can be started using ZooKeeper or KRaft. To get started with either configuration, follow one of the sections below, but not both.
Kafka with ZooKeeper:
Run the following commands to start all services in the correct order:
Start the ZooKeeper service
$ bin/zookeeper-server-start.sh config/zookeeper.properties
Open another terminal session and run:
Start the Kafka broker service
$ bin/kafka-server-start.sh config/server.properties
Once all services have successfully launched, you will have a basic Kafka environment running and ready to use.
Kafka with KRaft:
Kafka can be run in KRaft mode using either the local scripts and downloaded files or the Docker image. Follow one of the sections below, but not both, to start the Kafka server.
Generate a Cluster UUID
$ KAFKA_CLUSTER_ID="$(bin/kafka-storage.sh random-uuid)"
Format Log Directories
$ bin/kafka-storage.sh format -t $KAFKA_CLUSTER_ID -c config/kraft/server.properties
Start the Kafka Server
$ bin/kafka-server-start.sh config/kraft/server.properties
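With the broker listening on localhost:9092 (the default), a Spring Boot application can connect by setting spring.kafka.bootstrap-servers, or with explicit wiring like this sketch (class and type parameters here are illustrative):

import java.util.Map;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;

@Configuration
public class KafkaClientConfig {

    @Bean
    public ProducerFactory<String, String> producerFactory() {
        // Points the client at the broker started above.
        return new DefaultKafkaProducerFactory<>(Map.of(
                ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092",
                ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class,
                ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class));
    }

    @Bean
    public KafkaTemplate<String, String> kafkaTemplate(ProducerFactory<String, String> pf) {
        return new KafkaTemplate<>(pf);
    }
}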
My Top Playlists:
Spring Boot with Angular : • Spring Boot + Angular
Spring Boot with Docker & Docker Compose : • Spring Boot Docker & D...
Spring Boot with Kubernetes : • Spring Boot Docker Kub...
Spring Boot with AWS : • Spring Boot + AWS
Spring Boot with Azure : • Spring Boot Azure
Spring Data with Redis : • Spring Data with Redis
Spring Boot with Apache Kafka : • Apache kafka
Spring Boot with Resilience4J : • SpringBoot Resilience4j
At 8:24 the video says a consumer cannot read from multiple partitions, but at 8:28 it shows one consumer reading from multiple partitions. Please clarify 😮
Nice video❤
Thank you 👍
Thanks
So what will the message flow be in this scenario:
We have 4 partitions and a single consumer A; 4 messages are produced and all 4 are consumed by consumer A. Now another consumer B connects to the broker. How will rebalancing happen, and from where will B consume messages: from the beginning or from the latest offset?
If Consumer A consumed all 4 messages from partitions 0, 1, 2, and 3, and then Consumer B joins:
After the rebalance, Consumer A will continue consuming from its newly assigned partitions, and Consumer B will start consuming from the latest position (the offsets already committed by Consumer A) for its newly assigned partitions.
The exact starting point for Consumer B is determined by:
- If Consumer B is in the same consumer group as Consumer A, it starts from the latest committed offset for the partitions assigned to it.
- If Consumer B is in a completely new consumer group (not the same group as A), it consumes from the earliest or latest offset, depending on the group's auto.offset.reset configuration, which can be set to earliest or latest (see the snippet below).
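To make that concrete, here is an illustrative plain-consumer snippet (placeholder group and topic names); auto.offset.reset only kicks in when the group has no committed offset for a partition:

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class OffsetResetDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // Same group id as consumer A -> resume from A's committed offsets.
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "demo-group");
        // Only used when no committed offset exists: "earliest" replays the
        // log from the beginning, "latest" (the default) starts at the end.
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (var consumer = new KafkaConsumer<String, String>(props)) {
            consumer.subscribe(List.of("orders"));
            consumer.poll(Duration.ofSeconds(5)).forEach(r ->
                    System.out.printf("partition %d offset %d: %s%n",
                            r.partition(), r.offset(), r.value()));
        }
    }
}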
@technotowntechie9732 Thank you, sir :)
How will it work if we have multiple instances of the same producer? Messages will be sent in duplicate, right?
While having multiple producer instances can lead to potential duplicates if not managed properly, Kafka's idempotent producer feature and proper configuration can help mitigate this risk.
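A quick sketch of enabling idempotence on a plain producer (note the scope: it deduplicates broker-side retries within one producer session; two separate instances sending the same logical message still produce two records, so cross-instance dedup needs application-level keys or transactions):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class IdempotentProducerDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // The broker uses a producer id plus per-partition sequence numbers
        // to drop duplicate retries; this implies acks=all.
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (var producer = new KafkaProducer<String, String>(props)) {
            producer.send(new ProducerRecord<>("orders", "key-1", "hello"));
        }
    }
}

In Spring Boot the equivalent is spring.kafka.producer.properties[enable.idempotence]=true.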
Hi, do you have the PPT, please?
Udemy course about Kafka