I hope that one day M$ will sponsor you and you'll create these kinds of videos and courses for Azure :)
What about Domain Driven Design, where you would have a lot of shared business logic code? Would you actually have the same code deployed numerous times, once for each lambda? Won't this affect the size of the builds and subsequently the cold start times? Can Domain Driven Design be applied in a serverless context? Could you make a video with an example, if possible? What about the trick of caching data that is instantiated outside lambda handlers between subsequent lambda calls? Could you do something similar using the approach mentioned in your video? What about if you have other serverless services that your lambdas depend on, for example SNS/SQS, SQL databases in the cloud, Step Functions, etc.? Can you combine the solution mentioned with the Serverless Framework?
Currently working on the FE of a project that uses AWS Lambdas for its API, and the cold start delay is a pretty big issue for the product owners. I don't understand at all why these would be used generally, rather than only in specific scenarios where the user wouldn't be waiting for the API result.
Yes, unfortunately I've been burned here already. It's just not usable for public-facing things without a lot of micro-optimizations and many code/NuGet restrictions to get the cold start time down, even with ReadyToRun. Unfortunately, including the AWS SDK already balloons the cold start time to a pretty high baseline.
General rule of thumb: "magic", DI, reflection, and libraries are going to push your cold start time up. Lots of shared code will also push it up.
Did you try AOT?
I've been using ASP.NET Core APIs in AWS Lambda for over a year now. The cold starts average 1.4 seconds when running at 1536 MB. That's acceptable for my API. These are cold starts, mind you, so once there are a handful of lambdas running, they are a non-issue. YMMV depending on what your customers' needs dictate.
Azure Functions should step their game up like this.
Damn you Nick, I just searched for Nick the Greek 2 to check how it's rated on IMDB !
Cool option. I'm personally enjoying using Minimal APIs on Azure Container Apps for deploying serverless functions. A little more setup at first, but the built-in KEDA autoscaling and serverlessness are still there. And the Dapr integration is a nice-to-have, a future opt-in feature set that brings queues, inter-service communication, secrets management, virtual actors, storage connectors, etc., as needed.
Nice. I still need to use Container Apps; I forgot about all the KEDA goodness.
Hey, I was testing this and I just ran into a bit of a blocker. I'm kinda new to containers and was researching a bit, and Container Apps sounded nice, but I saw I needed to use a registry, and those cost money. So what's the advantage of using something like Container Apps with a free tier plus a container registry that costs money?
I mean over Azure Functions or Lambda functions. Or am I missing a lot, since I don't know much about containers?
@@KhaUh good question. I'm not an expert, but I suspect it depends on your use case. Surely, Functions are using the registry under the hood as well, and per-invocation they might be pricey at high volume. The Basic tier of the registry is $5/mo, and a typical Azure sign up will net you $200 in credits, so you could go for a long while with that. But I might be missing some important facts - does the free tier of Container Apps include the registry? Can you substitute an alternative registry from elsewhere?
Hey Nick - Any idea where in your aws-videos repo we can find your annotations project?
Thanks for sharing this. How do we debug these, e.g. the connection to DynamoDB?
I’ve been using Azure Functions for 2 years now and it’s the best.
Have you used isolated azure functions?
You better respond to my comment Nick!
I have not
It's awesome, but how could we pick a specific secret manager (hml or prd) and use it with DI?
Very nice. How does this integrate with production though? I mean, custom domains mapped to the API gateway, load balancers, etc.
I'm not familiar with this serverless.template that I see, and it doesn't look like the Serverless Framework.
Yeah, SAM is a good approach, but it will create a new lambda function for each API endpoint. Is this the right way to do it in large real-world applications?
I think this is great.
I already don’t write API controllers, I just use a source generator from my business processes.
It’s a pity that, as far as I know, source generators cannot execute against source generated code.
This is awesome! Thanks for sharing Nick.
@Nick Chapsas congratulations, you even have s(p/c)ammers impersonating you! You've truly made it.
Another Q: can this be deployed with Native AOT to beat the cold start issue?
Why didn't they just build their source generator around the existing controller annotations? It looks like they defined their own parallel annotations that make migration more difficult and also makes this code impossible to run outside of a lambda execution context.
Because it wouldn't make sense, and thank god they made their own. Controllers are just one of the paradigms, and an MVC one at that. It wouldn't make sense to have an IActionResult when there is no concept of a traditional controller action.
@@nickchapsas Apologies, but I still feel it would make sense, especially given your video showing the conversion of a controller to these new annotations. There's a one-to-one correspondence for the AWS annotations, which made that an easy process. Why not let an MVC-specific extension to these source generators do that conversion for me? There's also already official library support for mapping the MVC request pipeline and IActionResult execution onto lambda execution contexts.
Hi Nick! I noticed you're still using the old-fashioned Rider UI. Have you tried the new one? It can be enabled in the Appearance settings.
I'm intentionally using the old one 😂
@@nickchapsas okay got it :)
I saw your comment and tried the new one, and then I went back to the old one 4 min later 😂
@@richardarielcruzcespedes9455 i've had it for 5 minutes and still using it :D
@@richardarielcruzcespedes9455 why so?
First! Greetings Nick, you're an absolute legend.
+1 byniu
Great job! Does something similar exist for Azure Functions, in theory? If so, I think it would be interesting to see.
I wish this video had been released 2 months ago, before my feature started :) Do you know if there's any way to prevent the Function from being a singleton?
How is this superior to just using Amazon.Lambda.AspNetCoreServer with a proxy endpoint in API Gateway, and then coding the .NET API exactly as you always would, with DI etc.?
The only thing I can think of is that your individual lambda code would be smaller, as the lambdas wouldn't be shared.
Edit: I suppose doing it this way would actually give each individual endpoint its own endpoint in API Gateway, so that you can use AWS features specific to that endpoint. Also, this method is easier to deploy via stacks, while the other method would make Terraform easier.
Hasn't this been working in Azure Functions for a long time already? With the isolated approach you have everything the same, including middleware, and there is effectively no difference running it.
Doesn't that run an actual long-running host behind the scenes for the function? This is quite different. AWS does this with the equivalent of the in-process Azure Functions approach. AWS Lambda does also allow you to run your actual ASP.NET APIs with the same approach, but this is quite different, and I've shown that in a different video. Also, as far as I remember, the isolated worker process still uses the HttpTrigger attributes with the HttpRequestData request object and the FunctionContext. Can you map route parameters, query string parameters, and request objects/bodies in the same way?
@@nickchapsas
- In general I don't understand why HttpTrigger is bad. Aren't we trying to use Functions to work as an API, so why is an HTTP trigger bad?
- As for the isolated process, it's considered to be the future, and if we look at the Azure Functions roadmap from Microsoft we'll see that in-process will be deprecated. Isn't the idea of isolated a better approach, so we can separate our functions' code from the host itself?
1. Route parameters can be bound in the function declaration, and it's easy to do something like this:
public async Task<HttpResponseData> UpdateUserProfile_v1([HttpTrigger(AuthorizationLevel.Anonymous, "put", Route = "v1/users/{id}")] HttpRequestData req, string id)
2. Getting the request body, request validation, getting request data from headers, etc. is a one-liner of infrastructure code as an extension method on HttpRequestData; it's not a bother at all and is written once at the beginning of the project. You also have full control of the middleware, where you can do whatever you want.
3. For isolated functions the approach is to return HttpResponseData instead of IActionResult.
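The "one-liner extension method" from point 2 can be sketched roughly as below. This is a hedged illustration, not code from the video or any official package: the helper name `ReadAsJsonAsync` and the `UpdateUserRequest` record are made up, and the extension is written against `Stream` so it stands alone; in a real isolated-worker project you'd hang it off `HttpRequestData`, whose `Body` property is a `Stream`, so the core logic is identical.

```csharp
using System.IO;
using System.Text.Json;
using System.Threading.Tasks;

public static class RequestBodyExtensions
{
    // Web defaults: camelCase property names, case-insensitive matching
    private static readonly JsonSerializerOptions Options =
        new(JsonSerializerDefaults.Web);

    // Illustrative helper name; deserializes a request body stream once
    public static async Task<T?> ReadAsJsonAsync<T>(this Stream body) =>
        await JsonSerializer.DeserializeAsync<T>(body, Options);
}

// Hypothetical request contract used by the function
public record UpdateUserRequest(string Id, string Name);
```

Inside a function you would then write something like `var request = await req.Body.ReadAsJsonAsync<UpdateUserRequest>();`, which is the kind of one-off infrastructure line the comment describes.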
Hi, one question: will a serverless function work if the API needs to use SignalR, and will it still save cost? What I understand from serverless is that when no endpoint is hit by a client for some time, the server goes into hibernation, and when it receives an API request it cold-starts itself. So if I have a SignalR connection and SignalR constantly sends heartbeats, does this mean the server will never go into hibernation and hence no cost is saved?
You should have a separate service for the SignalR hub. In Azure it could be Azure SignalR Service. Your serverless function should communicate with this hub instead of owning it.
@@slepcu I see, and that serverless service would have to run 24/7. And we can't use IHubContext, so every other serverless function that wants to access the hub will have to make a separate HubConnection and then destroy it. Frequently creating and destroying: is this a good approach, or is there a better way around this?
In AWS you need to use the API Gateway WebSocket API. Currently, this does not directly support SignalR. You can hack it to work, but it's easier not to and just use the AWS Lambda / API Gateway WebSocket stuff.
WORD OF WARNING: if you are using SignalR, this implies it's a public-facing API. dotnet is NOT ready for building public-facing APIs with Lambda; the cold starts are too severe. Choose something with a good cold start time, there are plenty of good options. Don't make the mistake of getting close to launch and realizing you have a major problem that's going to require rewrites.
DI is not complex or as cumbersome as it used to be, especially in Azure. Isolated functions allow you to use dependency injection just as you would in a web API project.
I understand that .NET lambdas have a longer build time than Go, Node.js, or Python lambdas. But why do the response times decrease with each request to the same endpoint? I thought lambdas were built and discarded with each call. Is there some kind of build cache?
They are not built and discarded on every request; they are built before they are deployed. When you first call them, they are "started" from a cold state, which can take some time because they need to be loaded and run. After that they are kept in a "warm" state, where you can call them and they will return pretty fast because they are effectively already running. After not being called for some period of time (around 15-30 minutes) they will be "unloaded" and you'll have a cold start again, but only on the first call.
The start/stop behavior of lambdas isn't all documented because it is considered an "implementation detail" subject to change, but we can observe some behavior and get some insight.
When the first request comes into lambda, it is "cold", the request runs and now an instance of your lambda is "hot". The instance will remain "hot" until it is idle for an undefined amount of time (15-20min in practice). AWS also will randomly kill hot lambdas, probably while shuffling around resources (not something you control).
If you are under load, you will also experience multiple cold starts as AWS scales your lambda up. 10 concurrent incoming requests = 10 cold starts etc.
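The warm-instance behaviour described in these replies is also why anything initialised outside the handler is reused across invocations. A minimal, dependency-free sketch (class and member names are illustrative, not AWS APIs) simulating what the runtime does: construct the function class once per cold instance, then invoke the handler repeatedly while warm.

```csharp
using System;
using System.Threading;

public class Function
{
    // Counter only exists to demonstrate that setup runs once
    public static int InitCount;

    // Static initialisation runs once per cold start, not once per request
    private static readonly string ExpensiveConfig = LoadConfig();

    private static string LoadConfig()
    {
        Interlocked.Increment(ref InitCount);
        Thread.Sleep(100); // stand-in for building SDK clients, reading config
        return "config-loaded";
    }

    // The per-request handler reuses the already-initialised state
    public string Handler(string input) => $"{ExpensiveConfig}:{input}";
}
```

Calling `new Function().Handler(...)` repeatedly, as the runtime does while the instance is warm, leaves `Function.InitCount` at 1; only a fresh cold instance pays the setup cost again.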
Thx for sharing.
Looks good
Nick PLEASE SHOW THIS VIDEO TO DAVID FOWLER and make him do the same for azure functions
GCP has... something
🤣
oOooOoOo beautiful.
🤯
Serverless means each lambda is one separate function. If you are deploying an entire 'jumbo mega WTF' API system into one poor, weak lambda, you're doing it wrong and it will be TRASH. Stop doing/showing this. You can't just 'convert' your huge API into a lambda and expect it to work well; it needs a different coding style and approach. It's not for everyone; you need to relearn how you code.