Brilliantly explained .. I like youtubers like this who speak useful information to the point 👏
Thank you so much for your feedback 😊
Looking forward to seeing your channel grow. Great content so far. Best of luck!!
Thank you so much!
Thanks for the explanation. It'd be very useful to include the sources of the content.
Thanks, noted!
Great explanation!!
Thanks Milena!
Why are they called Lambda and Kappa? Are those abbreviations that mean something?
Great lecture
Glad it was helpful!
Very well explained video
Glad you liked it
How is the implementation simpler/easier for Lambda? It requires both batch and streaming layers. In Kappa, only the streaming layer is present.
Hello - it's because in organisations systems evolve from existing systems. Since batch is widespread, it's easy to evolve to a Lambda. A Kappa would usually require a new build.
Kappa architecture is also just the streaming layer + serving layer from Lambda architecture. How is it more complex to implement?
I think it's because Lambda Architecture doesn't completely do away with batch processing: it still retains legacy systems and tooling that operate on RDBMSs, which have to be ingested as batches. Kappa architecture, however, demands that all data flow through subscribe/notify event-driven systems such as Kafka Streams. So the data is always written once to an event-streaming platform like Kafka, either ordered or with compaction, and read by multiple different consumers based on their own data requirements and the view transformations they need.
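To make the "write once, many consumers" idea concrete, here is a toy Python sketch (my own illustration, not from the video) of a Kappa-style setup: one ordered event log is the single source of truth, a compaction step keeps only the latest event per key (as Kafka does for compacted topics), and two independent consumers replay the log to build their own views.

```python
from collections import OrderedDict

def compact(log):
    """Log compaction: keep only the latest event per key, preserving
    the order in which keys were last updated (like a compacted Kafka topic)."""
    latest = OrderedDict()
    for key, value in log:
        latest[key] = value
        latest.move_to_end(key)  # most recently updated keys go last
    return list(latest.items())

# One ordered event log is the single source of truth (Kappa style).
log = [("user1", 10), ("user2", 5), ("user1", 20), ("user2", 7)]

# Consumer A: materialises current state from the compacted log.
current_state = dict(compact(log))   # {"user1": 20, "user2": 7}

# Consumer B: replays the full ordered log to build a different view
# (running totals) - no separate batch pipeline needed.
totals = {}
for key, value in log:
    totals[key] = totals.get(key, 0) + value   # {"user1": 30, "user2": 12}
```

The point is that both views derive from the same stream; in Lambda, the totals view would typically be recomputed by a separate batch job over data landed from an RDBMS.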
Awesome sir🎉
Thanks 😊