How Atlassian reduced latency by 70% using the Sidecar pattern, and when you should use it
- Published Feb 9, 2025
- System Design for SDE-2 and above: arpitbhayani.m...
System Design for Beginners: arpitbhayani.m...
Redis Internals: arpitbhayani.m...
Build Your Own Interpreter / Redis / DNS / BitTorrent / SQLite - with CodeCrafters.
Sign up and get 40% off - app.codecrafte...
Recommended videos and playlists
If you liked this video, you will find the following videos and playlists helpful
System Design: • PostgreSQL connection ...
Designing Microservices: • Advantages of adopting...
Database Engineering: • How nested loop, hash,...
Concurrency In-depth: • How to write efficient...
Research paper dissections: • The Google File System...
Outage Dissections: • Dissecting GitHub Outa...
Hash Table Internals: • Internal Structure of ...
BitTorrent Internals: • Introduction to BitTor...
Things you will find amusing
Knowledge Base: arpitbhayani.m...
Bookshelf: arpitbhayani.m...
Papershelf: arpitbhayani.m...
Other socials
I keep writing and sharing my practical experience and learnings every day, so if you resonate then follow along. I keep it no fluff.
LinkedIn: / arpitbhayani
Twitter: / arpit_bhayani
Weekly Newsletter: arpit.substack...
Thank you for watching and supporting! It means a ton.
I am on a mission to bring out the best engineering stories from around the world and make you all fall in love with engineering. If you resonate with this, then follow along; I always keep it no-fluff.
I would be happier if Atlassian improved JIRA rather than reducing latency
😂😂😂😂
😂😂😂
keep dropping these insightful videos
Thank you very much for making this series; I have been learning a lot with each video of yours. Things are much simpler to me now in terms of design patterns and their implementations.
A sidecar also comes with a lot of observability baggage. When you run a highly available service, having your own sidecar comes with the challenge of monitoring it across other teams' infrastructure.
Also, you need to be very mindful of the CPU and memory resources your sidecar will use. If any future release comes with a potential increase in usage, clients will straight away reject the sidecar.
In such a case, it's best to offer both solutions to the clients.
That's the overhead. You get some, you lose some.
6:24 Couldn't they have used protobuf instead? Isn't that what we generally do in microservices architectures?
Moving to protobuf would have required a lot of changes on the client side (a different client, a different request-response format, etc.). That means much more effort when there are many callers.
Protobuf is generally used for RPC calls. It is possible that the application makes calls to the sidecar just like it would make calls to the TCS server, which means the teams using TCS just have to call the sidecar instead of refactoring all their HTTP calls.
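To illustrate that point, here is a minimal, hypothetical sketch (the hostnames, port, and route are my assumptions, not Atlassian's actual API): the client keeps the exact same HTTP request shape and only re-points the base URL at the local sidecar, avoiding a protobuf/gRPC client rewrite.

```python
import urllib.request

# Hypothetical addresses: before, clients hit something like
# "http://tcs.internal:8080" directly; after, the same call goes to
# the sidecar listening on the same host.
SIDECAR_BASE = "http://localhost:9090"

def build_tcs_request(tenant_id: str) -> urllib.request.Request:
    # The request shape is unchanged; only the base URL moves to
    # localhost, so callers need no rewrite to a protobuf/gRPC client.
    return urllib.request.Request(
        f"{SIDECAR_BASE}/tenant/{tenant_id}",
        headers={"Accept": "application/json"},
    )
```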
We recently implemented API tracing for our services using the sidecar approach.
Thank you, brother. That was a very nice and easy explanation.
Query: earlier, the client wrote the entire logic for calling and interacting with the TCS service; with the sidecar module, that calling and interaction logic is handled by the sidecar, and the client simply calls the sidecar's APIs. Is that a correct understanding?
Adding to this: the sidecar also does a bunch of other stuff. To see what else a sidecar can do, read about the fluentd sidecar.
It has its own memory space because it is a separate process, and hence does not interfere with the actual user-facing APIs, etc.
What exactly are those "best practices"? It's hard to visualise when we don't know what exactly was causing the issue.
6:50 I have a doubt here. I don't understand why we would have to write the client in the same language as the microservice. Isn't the point of microservices to communicate in a general format like JSON, protobufs, etc.?
If the client is already written in some language, then to support it, the library needs to be in the same language. E.g., if your clients are written in Java or Go, then the library should support both of them.
Let's take an example: your application (a web server) is the client here, and TCS is the server. In order to apply best practices to the requests your client (web server) makes to TCS, we have two options:
1. Implement a library/package and make your web server use this library to communicate with TCS.
2. Create a microservice (we call this a sidecar) and make your web server communicate with it rather than with TCS directly.
If we had gone with the first approach, and let's say you wrote your web server in Node.js, then the library has to be in JS. Or if it's in Go, then the library has to be written in Go. So rather than maintaining the library to support multiple languages, we can just use a sidecar: your web server sort of makes an API request to the sidecar, and the sidecar takes care of sending requests the way TCS expects (be it a REST API or gRPC).
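A minimal sketch of what such a sidecar could look like, assuming a hypothetical upstream URL, port, and TTL (none of this is Atlassian's actual code): it exposes a local HTTP endpoint and applies a long-lived cache so repeated lookups never leave the host.

```python
import time
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical values: the real TCS address, port, and TTL are not public.
TCS_BASE_URL = "http://tcs.internal:8080"   # placeholder upstream address
CACHE_TTL_SECONDS = 300                     # long-lived cache, per the video

_cache = {}  # path -> (expiry_timestamp, response_bytes)

def cached_fetch(path, fetch_fn, now=None):
    """Serve `path` from a local TTL cache, calling `fetch_fn` (one real
    TCS round trip) only on a miss or after the entry expires."""
    now = time.time() if now is None else now
    entry = _cache.get(path)
    if entry and entry[0] > now:
        return entry[1]                      # cache hit: no network call
    body = fetch_fn(path)
    _cache[path] = (now + CACHE_TTL_SECONDS, body)
    return body

class SidecarHandler(BaseHTTPRequestHandler):
    """Local HTTP endpoint the application calls instead of calling TCS."""
    def do_GET(self):
        def forward(path):
            with urllib.request.urlopen(TCS_BASE_URL + path) as resp:
                return resp.read()
        body = cached_fetch(self.path, forward)
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

# To run next to the service on every host:
#     HTTPServer(("localhost", 9090), SidecarHandler).serve_forever()
```

Because the cache lives in the sidecar process, every language's client gets it for free, which is exactly the advantage over a per-language library.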
If I were to roll out updates for the sidecar, how would that work, given that the sidecar is used by, let's say, 100 services?
But every client service has to run that sidecar on the same server. Isn't that an extra dependency?
Yes, that's a fair trade-off, I would say.
What about deployments of the sidecar? If there is any change, how would that be deployed everywhere?
Individual services pull and reload. The same thing would be required if they had opted for client libraries.
Couldn't understand what the reduction in overall requests to TCS is. Do you mean there is a good cache implementation, because of which the number of requests to TCS is reduced?
As written in his notes, it's due to long-lived caches.
Question.
So does every client have a different sidecar written? The library is language-agnostic, hence they set it up on the EC2 instance of every client microservice. Is it right to say all clients are already using the HTTP protocol and the same sidecar is just set up alongside all of them?
No, the same sidecar is used, as in one per client, sitting on its EC2 instance.
Not sure if this answers your question.
@tapank415 Every client microservice must be using an HTTP library. Otherwise, the sidecar needs to change as per the protocol.
Nice one, Arpit
Today I learned a new thing, "sidecar", which we can use for observability.
One doubt: the sidecar has the best practices (code) written by the TCS team, and other teams are using it to interact with TCS. Am I right?
So is a sidecar a tool or a pattern?
If it's a tool, how is it language-agnostic?
Isn't it essentially a language-agnostic library?
It's not a language-agnostic library; it's an architectural pattern for microservices.
A sidecar is not a tool. It's just a pattern: you write another program and run it alongside the service on the same server.
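A toy illustration of that idea, with stand-in commands (in practice this is usually two containers in one pod or task definition, not a hand-rolled launcher): the sidecar is simply a second OS process started next to the service on the same host, with its own memory space.

```python
import subprocess
import sys

def start_with_sidecar(service_cmd, sidecar_cmd):
    """Launch the sidecar next to the service: two separate OS processes
    on the same host, so the sidecar has its own memory space and cannot
    corrupt the service's."""
    sidecar = subprocess.Popen(sidecar_cmd)
    service = subprocess.Popen(service_cmd)
    return service, sidecar

# Stand-in commands for demonstration; a real deploy would start the
# actual service binary and the sidecar binary (or two containers).
svc, sc = start_with_sidecar(
    [sys.executable, "-c", "print('service up')"],
    [sys.executable, "-c", "print('sidecar up')"],
)
svc.wait()
sc.wait()
```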
Couldn't find the Atlassian link in the description.
Check the description.
Great explanation of a simple yet powerful pattern.
One follow-up here: does this sidecar process that runs on the same machine act as a caching layer for the client app (Jira/Bitbucket)?
When we say they are not following best practices, what best practices are we talking about, sir?
There are 3 questions:
1. What mistakes were made by the TCS clients? I mean, what good practices can a client miss?
2. Is there one sidecar server for all instances of a client? Then that sidecar can become a bottleneck for the client if all of its instances' requests go through one sidecar.
3. If each instance has one sidecar, then how do we make sure each instance has a running sidecar server while scaling up automatically?
1. As mentioned in the video: long-lived caches, cache invalidation, and parallel calls to TCS were the things the clients missed.
2. For 2 and 3, as per my understanding it should be one sidecar per instance, i.e., each instance will have two Docker containers (one for the main service, one for the sidecar). For the last part of your question, it could be part of Terraform to add all relevant Docker containers while spawning a new instance.
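One of the missed practices listed above, parallel calls to TCS, can be sketched like this; `fetch_contexts` and `fetch_one` are hypothetical names, with `fetch_one` standing in for a single sidecar/TCS HTTP lookup.

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_contexts(tenant_ids, fetch_one):
    """Fan independent TCS lookups out in parallel instead of issuing
    them one after another, so total latency is roughly the slowest
    single call rather than the sum of all calls."""
    with ThreadPoolExecutor(max_workers=8) as pool:
        # pool.map preserves input order in its results
        return list(pool.map(fetch_one, tenant_ids))
```

Doing this inside the sidecar means every client team gets the optimisation without changing their own code.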
@@shubhamsawlani2933
For 1: if a client does not do so (caching, parallel calls), it will end up making more API calls, so more requests hit the TCS server, and server latency increases for every client, not only that one thin client?
Can't we move the sidecar inside the TCS server itself and do those optimisations there? (No sidecar server; the optimisation moves inside the TCS server itself.)
Why do you think they didn't improve TCS instead of adding a sidecar?
The problem was not with TCS.
Hi Arpit,
Thanks for such videos; they really help a lot.
I'm interested to know the sidecar's low-level design details. I was wondering how they made sure that the client services of TCS don't repeat the mistake of not following best practices when consuming the sidecar APIs.
Nice video
“No random boxes” - I am sure you are calling out Gourav Sen 😂
And rightly so..
I wonder how this to-do-list-like Jira company became so big
We are using this type of pattern for 3rd-party API calls.
So we have to create a sidecar for that 3rd-party service, where there are a lot of calls? Any disadvantages or best practices to follow?
But it's only possible when the third-party service is available for self-hosting, right?
There is definitely potential for reduced latency by using a sidecar, but it would have been better had you shared what those so-called best practices are and what they didn't do correctly in the first place. But I wouldn't introduce another process just for calling another HTTP service... it's an inefficient use of hardware.
Best practices depend on the use case. Assume your use case and work it out.
🐲/acc