I've been watching videos about stream processing in Kafka for weeks now, and let me tell you, Tim is the first speaker I've been able to understand perfectly, to the point of giving me some good inspiration for cool Kafka projects at my company. Tim, if you ever read this comment, know that you inspired someone's professional life to the top. Keep it up, and thank you!
You can even replicate the data from a Kafka queue to your database in case of failure; since the messages in the Kafka queue are immutable, you can replay them from a given timestamp and perform your manipulations.
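The replay idea above can be sketched in a few lines. This is a toy model of an immutable, timestamped log, not the real Kafka client API; with an actual consumer you would use something like `offsets_for_times()` and `seek()`. All names here are illustrative.

```python
import bisect

# Toy immutable log: (timestamp_ms, key, value) records in append order,
# standing in for one Kafka topic partition.
log = [
    (1000, "user1", "created"),
    (2000, "user2", "created"),
    (3000, "user1", "updated"),
    (4000, "user2", "deleted"),
]

def offset_for_time(log, ts_ms):
    """Return the offset of the first record at or after ts_ms."""
    timestamps = [record[0] for record in log]
    return bisect.bisect_left(timestamps, ts_ms)

def replay_into_db(log, from_ts_ms):
    """Rebuild a key/value 'database' by replaying records from a timestamp."""
    db = {}
    for ts, key, value in log[offset_for_time(log, from_ts_ms):]:
        db[key] = value  # last write per key wins, as in an upsert
    return db

print(replay_into_db(log, 2000))
```

Because the log is immutable and ordered by time, the recovery point is just an offset lookup, and replaying from it is deterministic.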
Is the Kafka retention policy the only approach to trimming what otherwise seems like a very quickly growing volume of immutable datasets... sorry, messages? Assuming KSQL doesn't have a DELETE FROM STREAM, a retention policy seems like a very blunt approach. Is there a reference to best practices for managing data retention in a Kafka environment?
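Part of why retention feels blunt is that the broker deletes whole log segments once their newest record ages past the retention window, never individual messages. A toy sketch of that behavior (illustrative names, not a real Kafka API; real retention is configured per topic, e.g. via `retention.ms`):

```python
# Toy model of time-based retention: a partition is a list of segments,
# each an ordered batch of (timestamp_ms, message) records.
segments = [
    [(1000, "a"), (1500, "b")],   # oldest segment
    [(2000, "c"), (2500, "d")],
    [(3000, "e")],                # active segment, never deleted
]

def apply_retention(segments, now_ms, retention_ms):
    """Drop whole non-active segments whose newest record is older than the cutoff."""
    cutoff = now_ms - retention_ms
    return [
        seg for i, seg in enumerate(segments)
        if i == len(segments) - 1 or seg[-1][0] >= cutoff
    ]

# With now=4000 and a 2000 ms retention, the cutoff is 2000: the first
# segment (newest record at 1500) is dropped in full, even though a
# per-message policy might have kept part of it.
print(apply_retention(segments, 4000, 2000))
```

Compaction (keeping only the latest record per key) is the other built-in trimming policy, and it is the usual answer for keyed, table-like data.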
This guy is a great speaker
One of the best presentations on Kafka and Kafka Streams!! Awesome 👏 .. Thanks 😊
Great presentation! Thanks, Tim!
That presentation is just kafkaesque.
Great talk/presentation: simple, powerful, and realistic.
Talented speaker!!
Thanks, very helpful.
KSQL is awesome...
AWESOMEEEEEE!!!
What will be the retention period for KSQL tables? Can they be stored permanently?
12:55 Tim mentions two important differences from legacy queuing; one is storing the messages, what's the other?
The other is that you can have more than one type of consumer reading the same message, for example a logging consumer and an aggregating consumer.
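That fan-out works because each consumer group keeps its own offset into the same immutable log, so a logging consumer and an aggregating consumer both see every message. A toy sketch of the idea (illustrative, not the real client API):

```python
# Toy model: one immutable topic, two independent consumer "groups", each
# with its own committed offset. Both read the same messages in full.
topic = [("page_view", 1), ("page_view", 3), ("click", 2)]

class Group:
    """Stands in for a consumer group: its only state is its own offset."""
    def __init__(self):
        self.offset = 0

    def poll(self, topic):
        records = topic[self.offset:]
        self.offset = len(topic)   # "commit" after reading
        return records

logging_group = Group()
aggregating_group = Group()

# The logging consumer formats every message...
log_lines = [f"{event}={value}" for event, value in logging_group.poll(topic)]

# ...while the aggregating consumer sums the same messages per event type.
totals = {}
for event, value in aggregating_group.poll(topic):
    totals[event] = totals.get(event, 0) + value

print(log_lines)
print(totals)
```

Neither group's reads affect the other, which is the key difference from a legacy queue that deletes a message once one consumer takes it.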
38:28 is where I see the value... storage and I/O cost are fine, but syncing is where the value is?
Would the data of a SELECT be from the beginning of time or from the time you hit return?
This dude is a Rich Hickey fan
Thought you worked for DataStax...
Four years ago? Some things might be outdated already.
The partition problem is a very big pain that you have to consider at every move down the road.
video ID: Lmao, QQQ, it's 4 Q
Yes, IE6 sucks!!!
Multiple consumers in the same consumer group cannot consume from the same partition.
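Within a single consumer group, each partition is assigned to exactly one member, which is why adding consumers beyond the partition count leaves some of them idle. A rough sketch of round-robin-style assignment (illustrative only, not Kafka's actual assignor code):

```python
def assign(partitions, consumers):
    """Round-robin each partition to exactly one consumer in the group."""
    assignment = {c: [] for c in consumers}
    for i, partition in enumerate(partitions):
        assignment[consumers[i % len(consumers)]].append(partition)
    return assignment

partitions = ["orders-0", "orders-1", "orders-2", "orders-3"]

# Two consumers split four partitions between them.
print(assign(partitions, ["c1", "c2"]))

# Five consumers, four partitions: each partition still lands on exactly
# one consumer, so one consumer gets nothing.
print(assign(partitions, ["c1", "c2", "c3", "c4", "c5"]))
```

Consumers in *different* groups, by contrast, can all read the same partition independently.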
Make KSQL for Go or I am not using it.