✅ Learn how to build robust and scalable software architecture: arjan.codes/checklist.
We use gRPC in our multi-language deployment, and when we switched to it we immediately knew it was the right choice. The persistent connection you set up is amazing and super easy to use.
We use gRPC because of intra-cluster performance and 'near real-time replication' - there are a few videos of sending 10,000 messages in about 2 seconds with gRPC. For our testing we did cloud-to-cloud 'drag racing' of the same message over gRPC and REST using Python, and the hands-down winner was gRPC.
Another reason for our project: we are also building an SDK, supporting multiple languages, and we wanted that 'hard contract' of the proto file.
It was a learning curve for sure.
But now we're talking about what's good enough for the long-term growth of the app - because we have more tools, it drives us to better understand the business needs…
Many times, what's 'right' for the business does NOT mean the 'best' technology.
gRPC, REST, GraphQL… they're tools, not hills to die on :-)
Great overview - I look forward to every video!
There are also projects that store 2 billion records in SQL and NoSQL databases, but what really hurts with gRPC, let's say, is needing the DSL on both sides (client and server) and nothing more - with REST you can achieve the same just by exposing the API.
@metaltoad8462 When you say DSL, do you mean the proto file? If it's a private API and you're not sharing your repo with the outside world, I fail to see what is exposed. I mean, the HTTP payload is binary, and even if one decrypted the HTTP packet, one couldn't make heads or tails of it without the proto definition and an inspector tool, unlike the JSON in REST. I'm probably misunderstanding your comment, so please elaborate on what you meant.
Storing your proto files in a separate repository is the way to go. This way you can generate language-specific packages that represent the gRPC interface and treat them as versioned dependencies in your separate client and server projects, which then simply provide the server implementation or consume the client.
Having services maintain their own proto files and generate them as part of their build processes can introduce circular dependencies. If, for instance, two servers communicate with each other, acting as both clients and servers of each other, they can effectively cause a circular upgrade flow: you update server A, then update server B's A-client, which may have code ahead of server A's B-client, resulting in updating that client as well.
gRPC is less susceptible to that because its standards suggest you should not make breaking protobuf schema changes, but as a team I'm dealing with this right now since we are using the more relaxed OpenAPI REST approach.
For the question of where to store the proto files, one solution is to create a client in the same repo and expose it as a library so users of your service can easily consume it. It requires more work but makes everyone's life a lot easier.
@raulmatiasgallardo It is actually worse to use, because in REST you know the full schema, but in gRPC that's not always the case. gRPC is in fact better when you want to provide both the client and the server.
This video is really well made. Explanation is clear and easy to follow. One of the best I’ve seen. Nice background ambience, narration and editing. Subscribed!
Glad you liked it!
So glad you've done a video using Go!
I'm a Python dev but have been using Go in my spare time and I'm really enjoying it.
Amen…
Go is our second language…and every time I’m in it…I love it!!!
I don't find myself 'defaulting' to it yet, though.
gRPC generally works well. Our team uses it in many services. But my main issue with gRPC is Protobuf itself. There are many limitations imposed by Protobuf: for example, you can't have repeated fields inside a "oneof"; not all types are supported as map keys; there's no UUID support; and there's no support for standard time types (at least on the JVM). This leads to duplicate model definitions, writing conversion logic, and keeping them in sync. So my recommendation is to limit gRPC/Protobuf usage to where high throughput/low latency is critical and use REST for everything else.
I have found a cool solution for having a common application interface between two applications using REST. I did it with FastAPI and a Next.js application. I wrote a CLI using Typer: the command takes the FastAPI application, generates a swagger (OpenAPI) file, injects it into the Next.js project, and then uses the openapi-generator-cli package to generate an SDK for the Next.js application. And there you go: you just define endpoints with FastAPI and models, and you get an SDK with functions for calling the endpoints and interfaces describing how to communicate.
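Not the exact setup described above, but a minimal sketch of the first half of such a pipeline: a Typer command that dumps the FastAPI app's auto-generated OpenAPI schema to disk. The module name `main` and the output path are assumptions; the SDK step would then run openapi-generator-cli against the resulting file.

```python
# export_openapi.py - hypothetical helper: dump the FastAPI schema so a
# generator (e.g. openapi-generator-cli) can build a client SDK from it.
import json
from pathlib import Path

import typer
from main import app  # assumption: your FastAPI instance lives in main.py

cli = typer.Typer()

@cli.command()
def export(out: Path = Path("openapi.json")) -> None:
    """Write the auto-generated OpenAPI schema to a file."""
    out.write_text(json.dumps(app.openapi(), indent=2))
    typer.echo(f"Wrote schema to {out}")

if __name__ == "__main__":
    cli()
```

From there, something like `openapi-generator-cli generate -i openapi.json -g typescript-fetch -o sdk/` (exact flags depend on your generator version) would produce the client SDK.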
You can get the type annotations by generating the .pyi from the proto as well.
I use grpclib for my gRPC projects with Python, and it helps generate the .pyi file with the type annotations.
For JavaScript/TypeScript I use either nice-grpc & ts-proto, or malijs.
I hardly ever use the main gRPC libraries.
I generate the code in the Python build (to avoid checking in the generated code), and have the build system require mypy-protobuf. Then, in the build script - e.g. “hatch_build.py” - make sure to provide protoc the argument “--mypy_out=.” to generate the typed PYI files.
I'm not offering a vote of confidence - I've found the entire protobuf ecosystem to be crusty, and likely overkill unless you really know what you're optimizing for.
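For context, a rough sketch of what such a build step can look like using grpc_tools.protoc; the proto name and output paths are made up, and it assumes mypy-protobuf is installed so protoc can find its protoc-gen-mypy plugin.

```python
# Hypothetical build hook (e.g. invoked from hatch_build.py) that regenerates
# the Protobuf/gRPC code plus typed .pyi stubs at build time.
from grpc_tools import protoc  # provided by the grpcio-tools package

def generate() -> None:
    args = [
        "protoc",                       # argv[0] placeholder expected by protoc.main
        "-Iprotos",                     # where the .proto files live (made-up path)
        "--python_out=src/generated",
        "--grpc_python_out=src/generated",
        "--mypy_out=src/generated",     # typed .pyi stubs from mypy-protobuf
        "protos/user.proto",            # made-up proto file
    ]
    if protoc.main(args) != 0:
        raise RuntimeError("protoc failed")

if __name__ == "__main__":
    generate()
```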
(RESTafarian enters the room)
gRPC is not a REST replacement, but maybe an HTTP+JSON replacement.
Implementing REST requires many things. Some I consider important:
- URL or similar: in gRPC you don't have a standard way to locate a method call
- Semantics: there are no standard semantics for retrieval versus independent state change. In gRPC, everything is the equivalent of a POST, so generic middleware is mostly impossible (no URL also means no caching middleware).
- The IDL is the thing that could be used to deliver form data, but only statically (the shape), not predefined values and the like.
Middleware is possible if you have the same IDL specification. The real difference is that HTTP defines a sort of envelope that intermediaries can conform to without knowing the inner structure, while still knowing something about the semantics (e.g., a GET is more likely to be safely cacheable).
Wait! You cannot be a true RESTafarian because you did not call out his lack of mentioning HATEOAS! 😄
BTW, any idea who coined the term RESTafarian, and when? 🤔
@gearboxworks 😂 True. The word is missing, yet a reference to the forms is there ;)
Regarding the RESTafarian, no idea, but I really like the term because I also like reggae music 🇯🇲
Keeping all microservices in one repository isn't a monolith, it's a monorepo. In my experience that's the best approach, even without gRPC. It's way easier to manage versioning, and monorepos have far less friction than multi-repo services.
I used gRPC to send pictures from a client app to an image recognition model hosted on another machine, and it was 10 times faster than REST for big files like pictures.
Then you must have done it wrong. gRPC doesn't offer a meaningful performance improvement, it only gives you a better abstraction when programming.
I was under the impression gRPC uses Protobuf, which uses binary serialization, delivering smaller payload sizes than, say, JSON, and hence performance and efficiency gains.
@bigmacbeta You can use that in REST too.
@bigmacbeta And even then, it is not that much more efficient. Basically the difference is that Protobuf doesn't include the field names, and that's about it...
JSON serialization is going to be your biggest problem, and likely the cause of any slowdowns you might encounter. JSON is not very good for binary data. There is no binary or blob data type in JSON, only strings, and strings in JSON have to be UTF-8, so no binary data. Not only do you need to encode the binary data to make it valid UTF-8, you also have to encode that again into JSON. BSON actually supports byte arrays and would be more suitable for such a use case.
This really has nothing to do with REST. REST does not state that your payload must be JSON; you could in fact just serve the image, a tar or zip, etc., or even use Protobuf with REST just like gRPC does. But I'm not sure that's a great idea given the typical use case of REST - it kind of defeats the point.
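A tiny sketch of the overhead being described (the sizes are illustrative): JSON has no byte type, so binary data has to take the base64 detour, inflating it by roughly a third before any transport compression.

```python
import base64
import json
import os

# Pretend this is a 1 MiB image payload.
raw = os.urandom(1024 * 1024)

# JSON has no binary type, so the bytes must be base64-encoded into a string first.
as_json = json.dumps({"image": base64.b64encode(raw).decode("ascii")})

print(len(raw))      # 1048576 bytes raw
print(len(as_json))  # roughly 1.4 million characters once base64-encoded and wrapped in JSON
```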
If you want a monorepo, you can keep the proto and all services in a single repository (a.k.a monorepo). If you want multiple repositories, putting the proto in a separate repository is the way to go. Either way, you should keep the proto changes compatible.
I see some people have already mentioned OpenAPI. For a like-for-like comparison you probably should have used that. It immediately shows you that REST couples you just as tightly; it's just that most people don't write out the API and thus create the impression of decoupling. So, concluding: there isn't as much difference as it seems, but people can skip specifying the contract with REST, which indeed makes it simpler to start with. So, are you resource-oriented or action-oriented? That should be the question.
I think gRPC makes sense for large enterprises where a single request from an end user causes a cascade of microservice calls. The performance boost and strict boundaries fit that scenario, and the additional benefit of lowering computational costs at scale is also pretty significant (IIRC JSON encoding/decoding takes 20-30% more CPU than Protobuf packing/unpacking).
The issue isn't the technology, but rather the surface area between the applications using RPC for communication. Never expose your domain object; that's a trash pattern that will force you to change your endpoints every time something changes inside. Only relay what's vital to the end goal!
I am old enough to remember when SOAP was the shiny new thing.
I remember COBRA - Common object request broker architecture. Also RPC the original ha.
Yeah, every now and then I still get to maintain the crappy mess I built on it back then. I really don't have nostalgic feelings about the code I wrote in my younger days :/
@supercompooper I do too, but wasn't it 'CORBA', Common Object Request Broker Architecture?
There was XML-RPC too.
Odoo (aka OpenERP) still uses that.
You should look into JSON transcoding. I would never implement a REST service again - gRPC is just so much more convenient. And about the distribution of the proto files: I always have a dedicated project/folder that deploys packages to each technology's package repo.
At Google, protos are used for almost all internal communication between services. "Where should the .proto file be?" is solved by a monorepo that holds code for the vast majority of projects. It would live in a directory that belongs to the service exporting the interface and be importable by anybody who needs it.
We switched from SOAP to gRPC because we needed the tight coupling with contracts, but also wanted better backward/forward compatibility, which you can more easily achieve with gRPC if you just follow some rules when creating/altering your proto files.
On where to store the proto: I store the proto files in one repo and the generated code in another, and expose the generated code as a library that anyone can import and use. The process can be automated with pipelines that generate from the latest proto files, and it can be versioned for easy version tracking.
A micro-repo for each interface/.proto, or a bigger repo with a group of .protos?
Internal communication via gRPC and external via REST. BTW, when you write REST, please use Swagger/OpenAPI and generate the server & client from it.
Learning about gRPC for the first time today.
Thank you for this excellent explanation! You manage to explain concepts in a very understandable way without assuming too much, while still conveying a lot of experience 👌
So with OpenAPI (formerly Swagger) you can define a contract for a REST interface with a YAML file and also generate client or server code.
There are also tools that render those YAML files in a very readable way, including clickable demos.
Not a monolith. A monorepo. That's the way Google wants to maintain their stuff. You see a lot of things in gRPC that reflect Google's style of doing things.
I also had to search for a package that compiles proto to TypeScript and isn't the horrible CommonJS mess that the default protoc tool outputs. gRPC is really not made for browser apps.
I use nice-grpc & ts-proto for my TypeScript project. ts-proto will generate the necessary TypeScript code from your .proto files and you can specify the js version you want too.
NCBI has been successfully using gRPC in its backends for several years. There are a number of high throughput, low latency use cases (the Datasets product comes to mind but I've seen it in others.) And the compiled contract helps keep multiple teams in sync and make API transitions explicit. But it's not a silver bullet. APIs NCBI exposes to the outside world are usually "REST."
One of our teams had an issue because of gRPC's sticky connections.
The team didn't have a good way to load-balance gRPC connections from the client side and kept using the same connection to send multiple unary calls to the same service. So they kept spamming requests to the same service instance, even though the service had spawned a new instance to help handle the extra load.
They unnecessarily overloaded the service and degraded their own service as well.
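For reference, a hedged sketch of the kind of client-side mitigation that's often suggested: let the channel resolve all backend addresses (e.g. via a headless DNS name) and round-robin across them. The target name here is made up, and whether this helps depends on your deployment.

```python
import grpc

# Assumption: "dns:///payments.internal:50051" resolves to several backend IPs
# (e.g. a Kubernetes headless service). With the default pick_first policy the
# client pins to one of them, which is the "sticky connection" problem above.
channel = grpc.insecure_channel(
    "dns:///payments.internal:50051",
    options=[("grpc.lb_policy_name", "round_robin")],  # spread unary calls across backends
)
```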
Happy to see Go code in a video by Arjan :)
Hi Arjan, do you have any experience in Sun's implementation of RPC, also used by NFS? Is there any conceptual difference?
Just one thing: JSON might be used a lot with REST interfaces, but the representation of the data is actually irrelevant. You could, if you wanted, use Protobuf, Avro, Thrift, or any other serialisation mechanism you like. Much of the apparent overhead of REST is actually marshalling overhead, and using representations that minimise or eliminate that overhead is important when it comes to performance. JSON just ends up being the lazy least common denominator, even when something else would be more appropriate.
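As a rough illustration of that point (not from the video): a REST endpoint is free to serve a binary representation. This sketch assumes a generated `user_pb2` module and makes up the endpoint; it is only meant to show that the resource-oriented interface and the wire format are independent choices.

```python
# Hypothetical FastAPI endpoint serving a Protobuf payload instead of JSON.
from fastapi import FastAPI, Response
from generated import user_pb2  # assumption: code generated from a user.proto with id/name fields

app = FastAPI()

@app.get("/users/{user_id}")
def get_user(user_id: int) -> Response:
    msg = user_pb2.User(id=user_id, name="Ada")
    # Same resource-oriented REST endpoint, just a denser wire format.
    return Response(content=msg.SerializeToString(), media_type="application/x-protobuf")
```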
We use gRPC in an embedded device that is low-powered but has strict performance requirements. When there is a trigger, a multitude of services run at the same time, and it has to produce a result in preferably 1 second.
So far we are happy with it. But talking to the outside world is still REST and WebSocket.
I don't get the part where you say gRPC makes services tightly coupled, because these are contracts that the services must abide by!
14:04 - Git Submodules!
...really suck
@garorobe It's a solution nonetheless. But can you elaborate on why it sucks for you?
Chances are, you aren't actually using REST.... Unless you are full OData. gRPC also handles aspects of contract management between services, forcing you to define the API layer and generating the clients and server stubs, which I find to be of high value.
Web APIs can become cumbersome to maintain if you don't use OpenAPI + versioning or OData, but that's already more work than gRPC in most languages.
14:08 do you mean using git submodules when you say that they each access the repo containing proto files?
You can use both depending on your needs. It’s a balance between development complexity and speed
I had implemented a gRPC service and switched it back to REST. It was just too opinionated about authentication: either you use SSL or you have no authentication at all, and the Python implementation apparently has broken features regarding root certificates.
So if you want to use gRPC with Python, you have to pay a third-party SSL certificate provider if you want anything other than a completely open gRPC server.
Why not use LetsEncrypt or Cloudflare? No one pays for web TLS anymore.
Would love to see the in-depth library videos, but with Go ones :)
10:08 What do you mean? Almost 100% of all browsers and all web servers support HTTP/2.
@MrGiovajo Gunicorn doesn't support it, Uvicorn doesn't support it, etc. There are a lot of web servers that don't support HTTP/2.
While browsers support HTTP/2, there's no way for a web page to make custom HTTP/2 requests with the trailing headers that gRPC requires (and the standard allows).
So instead, people use a special proxy that accepts requests in a different form and then forwards them as proper requests to your gRPC server.
In general, gRPC goes beyond HTTP/2, and I've seen it used over many types of transport.
Thanks for the clarification!
You can also use buf, it's way easier
At 15:40 you said that there's no typing support in REST. When you work with FastAPI + Pydantic, as you do in the example, you get the `openapi.json` interface auto-generated for you, which includes all names, return values, and types. You can use this openapi.json file as input for (for example) the openapi-typescript-codegen TypeScript generator. So you will have a typed version of your Python REST API in your TypeScript app (you just need to run the automatic code generator). Problem solved.
gRPC sounds very promising for efficient communication between services, especially with features like streaming and strong typing. Have you considered how OTP's native mechanisms, such as GenServer, could be leveraged for similar goals? In certain cases, it might offer even tighter integration and fault-tolerance within the BEAM ecosystem. I'd love to hear your thoughts on where these approaches might complement or diverge.
14:38 gRPC should generate type annotations automatically if you pass the flag --pyi_out (or something like that) to protoc.
I really like the idea of gRPC, but I have never had a good use for it either at work or in my side projects.
There are applications that use both gRPC and REST; it all depends on the needs of the application.
gRPC is probably preferable if your domain doesn't map cleanly to a "CRUD model" - say, when nothing is saved in a persistence layer, or when you're calling remote pure functions.
As always, both REST and gRPC could be used for, for example, "create a user" or "return a random number".
My 2 cents.
I'm looking forward to the episode about Golang 😊
The gRPC lib allows you to generate .pyi files from the proto.
That's what I use too.
How did you know that I'm searching for a way to write interfaces right now? :)
A distributed system doesn't do RPC at all (except in some specific cases), otherwise it's not distributed. It's just a way to increase the employment rate for software engineers.
Not gRPC, but a custom RPC as an internal API server could have more advantages than REST, because sometimes functions just have to be functions…
I think gRPC could complicate things unnecessarily.
gRPC is not solely about performance gains. Exposing a function through gRPC is a cakewalk. In REST, you end up bikeshedding over which HTTP method a given function should correspond to. Everyone has their own understanding of what RESTful is.
another on point video. thanks
You’re welcome!
A Go course please ( advanced topics only ).
Why do people make videos about topics they're completely ignorant about? Why should ANYONE listen to you after you've used gRPC for all of 7 minutes, and don't even discuss the other alternative options in the space?
I had the recent misfortune of having to use a gRPC application, and whilst it does have a great deal of precision, it was really awful to use. I would say that gRPC ensures absolute compatibility between two different systems and is somewhat language-agnostic, but for simple tasks it is way overkill.
For most things, I'd just like to send a simple message and get a simple reply, and not have to generate an entire communication schema. But when you're dealing with massive, massive systems that are completely interdependent, it can be useful to know precisely what you're getting.
Maybe it's a bias of working on production systems but it's hard for me to imagine preferring loose/unexpected behavior.
Wait I thought browsers did support HTTP2 with TLS, no?
Importantly, it's not what you want coming in from the web browser. You need to upload and download files, such as MP4 movies that need to actually be rendered.
Here is a situation where gRPC might work: define the binary protocol between multiple services with a BNF grammar specifying which messages they can send. You effectively have message-passing state machines. Define it so that it's never ambiguous when you must be listening for bytes versus trying to send them. If you have a message-passing state machine, then gRPC might not be a mistake. Also, code generation is a pain in the ass for your build pipeline.
Dude, all Google REST APIs are authored using proto files and use gRPC. How is that? Well, there is a gRPC gateway that renders REST from gRPC… wow…
14:10 You can keep all the proto files in one repo and include it as a git submodule in each of your service repos, but my preference is the monorepo approach.
In Python gRPC is terrible and I would prefer REST
Why is gRPC terrible in python?
Please elaborate.
@bigmacbeta With REST you can make simple requests, while gRPC relies on elaborate generated code.
Actually, gRPC allows two ways of using .proto files: dynamically or statically.
Dynamically: no code is generated.
Statically: the code is generated for you.
I use a library called grpclib when working with gRPC in Python instead of the official grpc package.
I combine gRPC and REST.
I use REST for the API gateway, while I use gRPC for service-to-service communication.
I assume the generated code is more performant, that's generally the case.
Anyone tried with Rust?
My favorite answer for what g stands for in grpc is the recursive explanation, where g stands for gRPC 😂
gRPC: Remote Procedure Call
It stands for Google. We all know it. It's why Go is called Go. It's not a well-hidden secret.
@yodo-y3i That's the wrong answer; in each release the g stands for something different - I even remember "gladiator" somewhere in the long list.
@chudchadanstud No, it means Google. They're just sugar-coating it. Why else would Google use g? Why not any other letter?
We used gRPC for our internal Go microservices. Then we moved to REST and our memory allocations were cut in half. Why? Because the structs created by gRPC did not fit our needs, so we had to remap and reallocate every time we passed a struct around. Also, we did not need streaming.
I highly recommend you avoid gRPC, or if you think you are an exception, run real-world benchmarks to prove that you are actually seeing performance improvements.
Maybe you used pointers more than was needed?
@MrLotrus Well, if you use gRPC, you don't have much choice, since it adds a pointer to every single thing. The generated structs contain mutexes as well, and I'm not sure why.
But there are other things. For example, the generated structs usually don't compile to very idiomatic Go; their naming is clearly C++-inspired. You also have types that don't have a clear equivalent in gRPC, for example UUIDs. So if you use a UUID all over your application, you either have to define a message in gRPC just for the UUID, or you have to parse strings into UUIDs to make sure the string that was passed in is a valid UUID. The whole performance argument for gRPC is that it serializes and deserializes better than JSON, but you only reap that benefit if the struct you are serializing/deserializing is exactly the struct generated by the Protobuf compiler. When it isn't, you have to convert and map every field, which not only takes a long time but also adds a lot of branching and allocations.
Doing HTTP+JSON with something like Sonic (which reaps the benefits of SIMD) and Fiber (which tries its best to avoid allocations) does wonders. And it doesn't force us to write mappers all the time, which means we have time to better optimize our algorithms as well - which is an order of magnitude more effective than anything a non-self-describing binary data format that loves putting pointers and mutexes everywhere can do.
Although, to your point, we did test the simplest case of gRPC vs. Fiber+Sonic. It was not as horrible as the real-world one - just 10% fewer allocations, but around 200% slower anyway.
I have to mention that I'm not sure if it was 200% or 400%. The compiler version sometimes matters too, although newer versions don't always mean "better".
Generally you shouldn't use these essentially old-school DTOs anywhere outside the service implementation. They carry too much metadata in most applications; you should explicitly map to internal models. But this is also true if you use some of the OpenAPI generators.
@EraYaN Yes, that is true. But you see, the code generated by gRPC is less about the language the libraries are generated for and more about the way you can define the messages in Protobuf. The end result is usually unidiomatic, full of needless pointers, and does not map well to how you may want your JSON APIs and/or ORMs to represent the data. The generated code is also not well organized: you get a huge file with tons of functions and interfaces that have no use for you. And so, with the need to remap data at every single edge of every single application, you get a guaranteed mapping step. This is less of a problem with JSON APIs (be they REST or AMQP). Why? Because your typical backend has to represent every form of data as JSON anyway, and your usual definitions are written in the language the endpoints are generated for. So you only need to remap the data when you need to change the structure of the data.
I have to admit I have never done much code generation from OpenAPI definitions; I usually did it the other way around. So I'm not aware of the peculiarities of those generators.
Very funny how CORBA comes back
I really don't understand your argument that the services become coupled to each other simply because they use the generated code. Whenever you use a third-party library, like S3 or whatever, you should adapt to it so as not to couple your codebase to it, so I don't understand what you are actually saying. I generate an OpenAPI REST spec, and you can publish the URL so other services can generate from it too by fetching the spec themselves. Please, please have a single source for your definition/spec. The second you don't have a spec, bugs ensue, purely from the deviation between expectation and reality. Generating code from the spec will reduce bugs.
As for REST vs. gRPC, I personally don't see a single difference in the outcome of the code design. Only one forces you to generate code, but you should generate anyway if you can, or use the vendor's library if it exists, or make your own thin library to interface with the generated code. From a code design perspective, you're just sending and receiving data under a contract at the end of the day. So I don't care between gRPC and REST. Other perspectives should drive your decision, though: team skill, experience, any technical advantages? I don't see many. I'd boil it down to how it fits the team, the project, and the organisation. If someone else who is not under your management wants to use your service... you can offer REST and gRPC together, to let them choose how they want to call your API.
Hi, my name is Jon, and I'm a typeannotationaholic.
Do golang videos please! 🙏🙏🙏
Feedback: your videos are getting too long - great content, but too much talking. Keep it up!
why do you all use thumbnails with pointing your finger...
I wouldn’t use REST in the first place…
So, why don’t most people use gRPC?
Please do not laugh, I still use CORBA.
16:42 lmao
If you think JavaScript is a good programming language, you probably won't like gRPC.
Only available to google chrome.
++go
The opening alone made me quit the video. Stating that gRPC, a framework extending the RPC protocol, is an alternative to REST, a paradigm in software architecture, makes it obvious that you don't have the needed knowledge to teach about it.