Impressive, very nice. Let's see Paul Allen's optimization.
dude I really appreciate you for commenting so often. Mhmm very nice, it has such an impressive THICCNESS to it
Your compliment was sufficient, Josh.
@@BeyondLegendary Lmfao
@@BeyondLegendary hahaha
nearly by half -> nearly 40% -> in the end 34%
just tell us the truth even if it doesn't sound so flashy 📸
fair point
Edge servers tend to be weaker virtual machines. Their advantage is proximity, but the weaker hardware could be a significant factor. If your requests are taking 200ms+, it sounds like there's either a decent amount of compute happening, which would make the weaker machines more noticeable, or a decent amount of network I/O, which could be related to what you were showing with the distance between servers and DB.
oh yeah there's definitely some compute, this is nowhere near an empty api route. Just the relative differences between edge and non-edge were a bit surprising to me
Oh damn... another day, another new thing I'm learning from you 👍💪
cheers kavin!
Edge adds an additional layer: extra routing, plus the stack and network specifics of the provider. They also run analytics, collect telemetry and do a bunch of other things which can slow down the request.
Maybe the extra delay is because Edge Functions run on Cloudflare while regular Serverless Functions run on AWS. With both Upstash and PlanetScale running primarily on AWS, the connection inside of AWS might be faster.
Conceptually, it acts like a DB transaction. Alternatively, in some cases it would be possible to conditionally merge the operations.
I’m confused, isn’t the downside of this that you’d be making unnecessary calls if the 2nd func only needs to be executed conditionally? Yes the route takes less time now, but if you have a very expensive func, you’d be running that constantly.
if the condition doesn't run, the command will not be added to the pipeline and will not be executed
@@joshtriedcoding maybe I'm missing something, but according to your diagram doesn't this mean that it needs to wait for the first command to return anyway before running the second? So how is it different from awaiting the command?
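To make the conditional-execution question in this thread concrete: a pipeline only queues commands client-side and flushes the whole batch in a single round trip, so a command behind an `if` is simply never queued. A minimal TypeScript sketch, using a hypothetical in-memory `Pipeline` class as a stand-in for a real Redis client:

```typescript
// Hypothetical stand-in for a Redis pipeline: commands are queued
// locally and sent to the server in one round trip on exec().
type Command = [name: string, ...args: string[]];

class Pipeline {
  private queue: Command[] = [];

  set(key: string, value: string): this {
    this.queue.push(["SET", key, value]);
    return this;
  }

  incr(key: string): this {
    this.queue.push(["INCR", key]);
    return this;
  }

  // One network round trip for the whole batch (simulated here).
  exec(): Command[] {
    const batch = this.queue;
    this.queue = [];
    return batch;
  }
}

const pipeline = new Pipeline();
pipeline.set("views:post-1", "42");

const userIsLoggedIn = false;
if (userIsLoggedIn) {
  // Never queued, so never executed: conditional logic still works.
  pipeline.incr("views:user");
}

const sent = pipeline.exec();
console.log(sent.length); // → 1, only the unconditional command was sent
```

The point is that "adding to the pipeline" is a local, synchronous operation; nothing waits on the network until `exec()` fires the whole batch at once.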
What you're calling "blocking" requests are not actually blocking, since you use async/await. The correct term is "sequential".
blocking means the client usually waits and doesn't do anything else until it receives the server response, doesn't that block the process/thread reading the response? Sequential sounds like a good term to describe this either way
@@joshtriedcoding `await` doesn't block the thread, quite the opposite - it allows the thread to process other Promises in the meantime.
@@joshtriedcoding blocking in multithreading means something that occupies the thread and makes it unusable.
await waits for the result but doesn't block, which is the main difference.
I'd avoid the "blocking" terminology in this case, as none of the threads were blocked.
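The distinction in this thread is easy to demonstrate: while one async function is awaiting, the single JS thread keeps running other queued work, so the function is suspended but the thread is not blocked. A small self-contained sketch (function names are illustrative):

```typescript
// While slow() is awaiting its timer, the thread is free to run fast():
// awaiting suspends the *function*, not the thread.
const order: string[] = [];

const sleep = (ms: number) => new Promise<void>((r) => setTimeout(r, ms));

async function slow() {
  order.push("slow:start");
  await sleep(50); // suspends slow(), releases the thread
  order.push("slow:end");
}

async function fast() {
  order.push("fast:start");
  await sleep(10);
  order.push("fast:end");
}

async function main() {
  await Promise.all([slow(), fast()]);
  // fast() finished during slow()'s await - the thread wasn't blocked
  console.log(order.join(",")); // → slow:start,fast:start,fast:end,slow:end
}

const done = main();
```

If `await` truly blocked the thread, `fast:end` could never appear before `slow:end`.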
Any such feature in axios? I guess we could only opt for SWR for such kinds of optimisations
Hey, is programming your daily job, or what do you do for living?
You can configure the vercel edge locations, then they will definitely be faster than serverless
Very informative video, thanks for that!
Hey Josh, I am using MongoDB with Next.js and it's super slow, 15-20 seconds each request. The same API takes only 500ms in Express or NestJS
oh wooow it should not take 15-20 seconds
I don't know the specifics of your code, but something I encountered when using MongoDB and Next.js was that a new connection to the database was initialized on every incoming request. I fixed this by caching the connection. I also believe the type of runtime may cause the same thing to happen, since some runtimes don't support long-lived connections.
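The caching fix described above has a well-known shape: memoize the connection *promise* so every request (including concurrent ones racing the first connect) shares one connection. A sketch with a hypothetical `connectToDb` stand-in in place of the real MongoDB driver call:

```typescript
// connectToDb is a hypothetical stand-in for the driver's connect();
// it counts how many real connections get opened.
let connectionsOpened = 0;

async function connectToDb(): Promise<{ id: number }> {
  connectionsOpened += 1; // the expensive handshake would happen here
  return { id: connectionsOpened };
}

// Cache the *promise*, not the resolved client, so concurrent requests
// that arrive before the first connect resolves still share one connection.
let cached: Promise<{ id: number }> | undefined;

function getDb(): Promise<{ id: number }> {
  if (!cached) cached = connectToDb();
  return cached;
}

async function demo() {
  // Three "incoming requests" end up sharing a single connection.
  await Promise.all([getDb(), getDb(), getDb()]);
  console.log(connectionsOpened); // → 1
}

const done = demo();
```

In a real Next.js app the cache is usually stored on `globalThis` so hot reloads in dev don't open a fresh connection each time.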
Check your functions' location, I believe by default it's set to Washington. You can find that under Settings > Functions in your Vercel project page
Standard nextjs user
@@miguderp No, I have set it to the nearest region, and I'm also seeing this locally.
use vercel's regional edge, which would only use edge workers near your database region
How do I do this for Spring Boot?
why did you decide to use Redis for this project?
great question 👍
cause its fast
for this scenario - it seems scalable, fast and popular. But I'd say it's quite a misuse, and something event-driven and/or actor-based would be more suitable)
@@11r3start11 The built in TTL is super handy, it's fast because it's in-memory and beyond key-value pairs and some simple hashes there are no complex relations. Not sure what you mean by misuse
actually Theo has a video answering my question on stream about the exact same topic (when edge was just introduced).
Theo said when he recommends using the edge, he's talking about the RUNTIME, not the location, because of the same issue you figured out yourself.
you can configure Vercel's edge to only run in a specific region (which you'll want next to your DB), but still use the good and fast edge runtime.
here's the video I'm referencing: ua-cam.com/video/UPo_Xahee1g/v-deo.html
(I'm so hyped about it because it was the first time he noticed me on stream 😆)
great job btw figuring it out on your own
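For reference, pinning the edge runtime to one region as described in this thread comes down to two route segment exports in the Next.js App Router (the region id below is just an example, pick the one closest to your DB):

```typescript
// Route segment config: keep the fast edge runtime, but run the
// function in a single region near the database instead of globally.
export const runtime = "edge";
export const preferredRegion = "fra1"; // example region id
```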
can you make a complete next-auth tutorial video, from basic to advanced level..
Nice! Can u please make a video on how to handle caching and invalidate the cache in a large relational database, and how to set up keys with Prisma and Redis please
How can I do that in mongoose🤓
But 200+ms is still slow tho..
What's your point?
depends on the calculations you do in the api route
Imagine Prisma had that
is this a course or you building your own website/project
wait, u're gonna tell me, instead of making a sequential requests, that parallel will be faster? NO WAAAAAAAAAAAAY
You look like Foden, the football player
first it was Kevin De Bruyne and now this
I am too lazy to watch the full video but I want to increase API speed
Stop using a baby language and use Rust or C++ and you have a 100-200% speed increase. We system developers are like "250ms? pffff, kill off the API guys, this adds too much overhead."
🤡
@@joshtriedcoding Yeah it always humors me when I hear JavaScript and Python devs talk about performance, when their initial choice to use those languages for a backend should be at the very least eyebrow raising.
And they are ugly large bloated languages. I like small lean and mean languages, they are also more robust
This could have been a short. You just batched requests together using a Redis pipeline. It's not a trick but a common occurrence in all products. Disappointed with the clickbait 😞
no