How LinkedIn improved their latency by 60%
- Published 27 Jun 2024
- System Design for SDE-2 and above: arpitbhayani.me/masterclass
System Design for Beginners: arpitbhayani.me/sys-design
Redis Internals: arpitbhayani.me/redis
Build Your Own Redis / DNS / BitTorrent / SQLite - with CodeCrafters.
Sign up and get 40% off - app.codecrafters.io/join?via=...
Recommended videos and playlists
If you liked this video, you will find the following videos and playlists helpful
System Design: • PostgreSQL connection ...
Designing Microservices: • Advantages of adopting...
Database Engineering: • How nested loop, hash,...
Concurrency In-depth: • How to write efficient...
Research paper dissections: • The Google File System...
Outage Dissections: • Dissecting GitHub Outa...
Hash Table Internals: • Internal Structure of ...
BitTorrent Internals: • Introduction to BitTor...
Things you will find amusing
Knowledge Base: arpitbhayani.me/knowledge-base
Bookshelf: arpitbhayani.me/bookshelf
Papershelf: arpitbhayani.me/papershelf
Other socials
I keep writing and sharing my practical experience and learnings every day, so if you resonate then follow along. I keep it no fluff.
LinkedIn: / arpitbhayani
Twitter: / arpit_bhayani
Weekly Newsletter: arpit.substack.com
Thank you for watching and supporting! It means a ton.
I am on a mission to bring out the best engineering stories from around the world and make you all fall in love with engineering. If you resonate with this then follow along; I always keep it no-fluff. - Science & Technology
The quality of content is insane. I had read about the limitations of JSON but never understood them the way I do now. Big thanks!
It's a simple change, but you make it sound like rocket science.
His channel is nothing but him walking through engineering blogs; bro only opened his channel to sell "cohort" courses that cost 40k+.
And people there only learn shit. This guy can't even code and just builds Redis and bloom filters.
He doesn't even know that Redis-from-scratch videos already exist on YouTube.
Most brilliant ideas aren't huge changes but small incremental ones.
The Pareto principle states that 80% of results come from 20% of actions.
:) Nailed it with this comment.
Yeah, very simple to copy. But the engineering and thought behind it?
Hi Arpit! Lovely video. Got a question for you w.r.t. the above example: "123456789" consumes 9 bytes, and on top of that JSON transfers a few more bytes of overhead, including the colons etc. Protobuf is one way of improving efficiency, but I believe CSV would perhaps also be more efficient than JSON? We can eliminate the quotes and keys: just return values separated by commas and split them apart on the client end. What do you think of this?
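A quick sketch of the size difference this question is getting at. Protobuf stores integers as varints (7 payload bits per byte) instead of ASCII digits; the encoder below is a minimal hand-rolled illustration of that wire format, not LinkedIn's actual code:

```python
import json

def encode_varint(n: int) -> bytes:
    """Encode a non-negative int as a protobuf-style varint, 7 bits per byte."""
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        if n:
            out.append(byte | 0x80)  # high bit set: more bytes follow
        else:
            out.append(byte)
            return bytes(out)

as_json = json.dumps(123456789).encode()  # the digits as ASCII text
as_varint = encode_varint(123456789)      # protobuf's integer wire format
print(len(as_json), len(as_varint))       # 9 vs 4 bytes
```

CSV would indeed drop the keys and quotes too, but it stays a text format: the number would still cost 9 bytes, and you lose nesting and types.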
Thank you for doing this!
Amazing. I have one question: won't serialization and deserialization add latency, since the client library receives the response in serialized form and has to deserialize it?
Thanksss a lot...Such quality content!!!
Brilliantly explained !! just like protobuf, info is efficiently packed ;) Thanks a lot :)
Awesome video Arpit :) Subscribed
Does Google Protocol Buffers also reduce huge JSON sizes for storage in a DB like Mongo, where there is a standard 16 MB document limit?
Quality content, thanks for this ❤
Hey Arpit, can you please share the document? For quick revision if needed in the future.
Hi Arpit, thanks for the informative video as always.
I have read that protobufs are generally used for inter-service communication and not browser-facing APIs, because they don't provide significant improvements over JSON in JavaScript-based environments. Am I right or wrong?
Yes, but this is not due to JSON. We do not see a substantial improvement because the latency of transmitting data over the internet is so high that the cost of serialization is minuscule in comparison.
Spending a few extra cycles there does not matter much, which is why protobufs are typically preferred for inter-service communication within the private infra/VPC.
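A rough, illustrative measurement of that serialization cost (numbers vary by machine; this is not LinkedIn's benchmark, just a stdlib timing of JSON encoding):

```python
import json
import timeit

# A moderately sized, list-style payload, typical of API responses.
payload = [{"id": i, "name": "member", "active": True} for i in range(1000)]

# Average time for one serialization, in seconds.
per_call = timeit.timeit(lambda: json.dumps(payload), number=100) / 100
print(f"serialize: {per_call * 1000:.3f} ms")
# On a typical machine this is well under a millisecond, while a round trip
# over the public internet costs tens of milliseconds -- the network dominates.
```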
@@AsliEngineering Thank you
Good learning
Quality and Clarity❤🔥❤🔥
A service mesh (Istio/Linkerd) solves this out of the box. Not to mention it's language-agnostic and decouples service-to-service communication from app code. Library-based frameworks are a thing of the past.
As per my understanding, a service mesh like Istio ensures that communication between services is trackable. But it doesn't reduce response time and plays no role in reducing latency.
LinkedIn might not have wanted to spin up more than the required pods for who knows how many applications.. 😅
To reduce cloud costs, organisations should also focus on the tech and libraries they use at the code level. As developers we should not always rely on cloud infrastructure.. 🙂
Merely managing and tracking traffic better doesn't solve the performance problem.
Also, LinkedIn is probably already using a properly configured service mesh.
You mentioned that one of the possible solutions could have been to compress the data and transmit. But isn't the data compressed by default with gzip in HTTP communication?
Not by default.
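To illustrate both halves of this exchange: gzip does shrink repetitive JSON a lot, but HTTP only applies it when the client advertises it (Accept-Encoding: gzip) and the server is configured to honour it; neither side does this automatically. A small stdlib sketch of the compression ratio:

```python
import gzip
import json

# A repetitive JSON payload, typical of list-style API responses.
payload = json.dumps(
    [{"id": i, "status": "CONNECTED"} for i in range(1000)]
).encode()

compressed = gzip.compress(payload)
print(len(payload), len(compressed))  # compressed is a small fraction of the original
```

Compression reduces bytes on the wire but adds CPU on both ends, and the payload is still JSON after decompression; protobuf attacks the encoding itself.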
Thanks Arpit.
So if I understood it correctly, we need to make changes in 2 places:
Backend:
Just before sending the response, convert the JSON to protobuf format.
Frontend:
Convert the protobuf back to JSON and then use it.
Please let me know if my understanding is correct or not!
This is done for microservice communication, so backend to backend. The frontend is not yet involved; their frontend API still talks GraphQL.
I literally couldn't wrap my head around how they did the rollout. Did the library just check whether the necessary headers were present and transfer data in the corresponding format?
Did they query an endpoint to see the supported content types?
There SHOULD BE A SWITCH somewhere that decides on the client whether to send protobuf or JSON. Or did they literally put this all in Rest.li, where the client attempts protobuf and, on an unsupported content type, resends the request as JSON?
Nope, seems like he's talking bollocks.
Between services you use protobuf, and on the client and the main service or gateway you use JSON; that's it, so it's standard gRPC.
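One common way to do such a rollout is plain HTTP content negotiation: newly upgraded clients advertise protobuf in the Accept header, while old clients keep getting JSON. A sketch of that idea (not LinkedIn's actual Rest.li internals; the media type and the stub encoder are illustrative):

```python
import json

def to_protobuf_stub(body: dict) -> bytes:
    # Stand-in for a real protobuf encoder, just to keep the sketch runnable.
    return repr(sorted(body.items())).encode()

def serialize(body: dict, accept_header: str) -> tuple[bytes, str]:
    """Pick the wire format from the client's Accept header; clients that
    don't advertise protobuf transparently keep receiving JSON."""
    if "application/x-protobuf" in accept_header:
        return to_protobuf_stub(body), "application/x-protobuf"
    return json.dumps(body).encode(), "application/json"

_, ctype = serialize({"id": 1}, "application/x-protobuf, application/json")
print(ctype)  # application/x-protobuf
_, ctype = serialize({"id": 1}, "application/json")
print(ctype)  # application/json
```

With a scheme like this there is no "resend on failure": the server makes the choice per request, so a fleet can be upgraded gradually and rolled back by simply changing what clients advertise.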
Hey Arpit,
Love the knowledge that you share.
What device and app do you use for note taking?
iPad + GoodNotes.
I believe they reduced response time rather than latency.
How/where do you find this information? These internal updates and use cases from companies? Is there a newsletter, website, or channel for it?
Nope. I regularly go through companies' engineering blogs and publications, and also conferences like VLDB, SIGMOD, and USENIX.
@@AsliEngineering Thank you ✨
I was trying to check the API calls my client makes when I open LinkedIn, and I was not able to find any calls with protobuf. I'm curious whether they still haven't rolled out this feature in India. Any ideas?
It is between services. Think of it as a service-mesh upgrade. The transit between services is more impactful.
The CodeCrafters recommendation from you is awesome and mind-blowing 😍... Thanks a lot, bro.
Glad you are enjoying it :) nothing beats being hands-on.
@@AsliEngineering ya
Superb
Awesome 😍 , thanks for teaching
Thanks
Thanks 😊. Keep making such informative videos.
It's surprising they didn't do this earlier and were paying high cloud costs till now.
They do excessive diversity hiring. No wonder…
How do you find blogs like these, and also the papers?
I regularly go through companies' engineering blogs and publications, and also conferences like VLDB, SIGMOD, and USENIX.
It's just a few clicks away 😂😂. You could also pick up a laptop and search a little.
Good content
A huge thank you for these videos sir!
Where has it reduced? LinkedIn keeps lagging all the time.
If you look at their API calls, they are GraphQL-based and have massive payloads. It takes a ton of time to transmit and process the data. So there is still some scope for improvement.
Just think: if the whole internet adopted this serialization and deserialization, 33 Mbps would feel like 80+ Mbps xD
Avro could also be used, but yes, JSON is slow and inefficient.
Just like gRPC and protobuf2.
Bhaiya has written the notes up above 🙂
Amazon Prime migrated from a microservice to a monolithic architecture.
Boss, your talk puts me into a heavy sleep. I don't need sleeping pills anymore. 🤣
Arpit, you're a true gem. You deserve BILLIONS of subscribers ❤
Thank you for all the support. Means a ton :)
Not really
According to this video the latency has come down, but in the actual app latency is at an all-time high. The difference between theory and actual user experience 😅
It still sucks.
Enough bro.. stop
Why
Watch hentai, don't come here
Tf. Go watch something else
@@addiegupta Just because you are an average software fan doesn't mean everyone has to be. His channel is nothing but showing engineering blogs on his iPad.
No coding, nothing. No wonder he was kicked out of Google; this guy can't code shit.