The Database Unbundled: Commit Logs in an Age of Microservices • Tim Berglund • GOTO 2019
- Published 25 Jul 2024
- This presentation was recorded at GOTO Copenhagen 2019. #GOTOcon #GOTOcph
gotocph.com
Tim Berglund - Senior Director of Developer Experience at Confluent @tlberglund
ABSTRACT
When you examine the write path of nearly any kind of database, the first thing you find is a commit log: mutations enter the database and are stored as immutable events in a queue, only some hundreds of microseconds later to be organized into the various views that the data model demands. Those views can be quite handy (graphs, documents, triples, tables) but they are always derived interpretations of a stream of changes.
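The write path described above can be sketched in a few lines of plain Python (no real database involved; `Event`, `CommitLog`, and the latest-value-per-key table are illustrative choices, not Kafka or any specific engine): mutations only ever append to an immutable log, and a table is one possible view derived by replaying it.

```python
# Minimal sketch of a database write path: append-only commit log first,
# derived table second. Purely conceptual; names are illustrative.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Event:
    """An immutable mutation as it enters the database."""
    key: str
    value: int

@dataclass
class CommitLog:
    events: list = field(default_factory=list)

    def append(self, event: Event) -> None:
        # Writes only ever append; nothing in the log is updated in place.
        self.events.append(event)

def derive_table(log: CommitLog) -> dict:
    """Replay the log to build one possible view: latest value per key."""
    table = {}
    for event in log.events:
        table[event.key] = event.value
    return table

log = CommitLog()
log.append(Event("account-1", 100))
log.append(Event("account-2", 50))
log.append(Event("account-1", 75))   # a later mutation of the same key

print(derive_table(log))  # {'account-1': 75, 'account-2': 50}
```

The point of the sketch: the table is disposable — drop it and replay the log, and you get it back, which is exactly the property the talk builds on.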
Zoom out to systems in the modern enterprise, and you find a suite of microservices, often built on top of a relational database, each reading from some centralized schema, only some thousands of microseconds later to be organized into various views that the application data model demands. Those views can be quite handy, but they are always derived interpretations of a centralized database.
Wait a minute. It seems like we are repeating ourselves.
Microservice architectures provide a robust challenge to the traditional centralized database we have come to understand. In this talk, we’ll explore the notion of unbundling that database, and putting a distributed commit log at the center of our information architecture. As events impinge on our system, we store them in a durable, immutable log (happily provided by Apache Kafka), allowing each microservice to create a derived view of the data according to the needs of its clients.
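A hedged sketch of that unbundling idea, in plain Python rather than real Kafka: the shared event log is the source of truth, and each microservice folds it into whatever view its own clients need. The event shape and the two services (`inventory_view`, `revenue_view`) are assumptions for illustration, not taken from the talk.

```python
# One shared log of immutable events; two services derive different views.
events = [
    {"type": "OrderPlaced", "sku": "book", "qty": 2, "price": 10},
    {"type": "OrderPlaced", "sku": "mug",  "qty": 1, "price": 5},
    {"type": "OrderPlaced", "sku": "book", "qty": 1, "price": 10},
]

def inventory_view(log):
    """Hypothetical inventory service: units sold per SKU."""
    view = {}
    for e in log:
        view[e["sku"]] = view.get(e["sku"], 0) + e["qty"]
    return view

def revenue_view(log):
    """Hypothetical billing service: a different derivation of the same log."""
    return sum(e["qty"] * e["price"] for e in log)

print(inventory_view(events))  # {'book': 3, 'mug': 1}
print(revenue_view(events))    # 35
```

Neither service queries the other or a central schema; each one's "database" is just its private fold over the shared log, which is what a Kafka consumer group gives you in practice.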
Event-based integration avoids the now-well-known problems of RPC and database-based service integration, and allows the information architecture [...]
Download slides and read the full abstract here:
gotocph.com/2019/sessions/926...
RECOMMENDED BOOKS
Sam Newman • Monolith to Microservices • amzn.to/2Nml96E
Sam Newman • Building Microservices • amzn.to/3dMPbOs
Ronnie Mitra & Irakli Nadareishvili • Microservices: Up and Running • amzn.to/3c4HmmL
Mitra, Nadareishvili, McLarty & Amundsen • Microservice Architecture • amzn.to/3fVNAb0
Chris Richardson • Microservices Patterns • amzn.to/2SOnQ7h
Adam Bellemare • Building Event-Driven Microservices • amzn.to/3yoa7TZ
Dave Farley • Continuous Delivery Pipelines • amzn.to/3hjiE51
#EventSourcing #ApacheKafka #Microservices
Looking for a unique learning experience?
Attend the next GOTO Conference near you! Get your ticket at gotocon.com
SUBSCRIBE TO OUR CHANNEL - new videos posted almost daily.
ua-cam.com/users/GotoConf... - Science & Technology
Big fan of Tim Berglund's talks; he doesn't disappoint with another great one here:
Starting from the Monolith [3:46],
The Re-integration Challenge [5:34],
Integrating Microservices through a Database [7:39],
Integrating Microservices via RPC Mechanisms [9:46],
Event-Driven Microservices [14:56],
Kafka and its Components [18:36],
Peeling the Database's Logical Layers [44:01],
Microservices as an Inside-Out Distributed DB [46:57]
The way Tim connects the dots together and delivers his idea is absolutely amazing! Great job!
Awesome talk.
I believe there is so much knowledge in the last 30 seconds that the rest of us will only understand it in years to come.
I'm a big fan of the podcast, and maybe this wasn't tailored to me. Just too simple and watered down. Maybe this is old and just posted recently. Still, I will upvote because it was well done.
So Kafka (a distributed log) should be used as an ESB (enterprise service bus) to integrate/connect microservices? Why would Kafka be a better fit than FuseESB or Mulesoft or Apache ServiceMix? Is it because most conventional/traditional ESBs don't have streaming APIs?
Datomic is a database that could be shared between all services and would solve the problem described.
Is this disguised propaganda to keep building better monoliths and stay away from all this madness? Does this even work in real life when your JSON schema changes? How do you handle a simple error that prevents you from moving forward in the log because your data is no longer consistent?
I did not learn anything new or useful from this talk. After you externalize the log, don't you need to build tables in order to create meaningful entities? Don't you need to join/merge the entities to consume and serve meaningful information? Don't you need to govern the consistent state of the information? Hasn't the database done all of that so well for you? I am pretty sure of this: if people can't build monoliths properly, microservices won't help. Neither will Kafka.
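For what it's worth, the join the comment asks about is usually done by each service materializing its own tables from the log and joining locally, rather than asking a central database to do it. A plain-Python sketch under assumed event shapes (not the Kafka Streams API):

```python
# Sketch: two event streams materialized into local tables, joined on key.
customer_events = [("c1", {"name": "Ada"}), ("c2", {"name": "Linus"})]
order_events = [("o1", {"customer": "c1", "total": 30}),
                ("o2", {"customer": "c1", "total": 12})]

customers = dict(customer_events)          # materialized customer table
orders_by_customer = {}
for _, order in order_events:              # fold orders into a second view
    orders_by_customer.setdefault(order["customer"], []).append(order["total"])

# Local join: enrich each customer with their order totals.
joined = {cid: {"name": c["name"], "orders": orders_by_customer.get(cid, [])}
          for cid, c in customers.items()}
print(joined["c1"])  # {'name': 'Ada', 'orders': [30, 12]}
```

Whether that is simpler than a database doing the join for you is exactly the trade-off the comment is questioning.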