A Better Way to Write APIs?
- Published 1 Jan 2025
- Turns out a method I found that seemed pretty hacky to me at first (because it messes with framework defaults) actually works quite well. It's pretty fast, cheap, and easy to set up. Honestly there might be bugs I haven't found yet, so please don't use this in production just yet lol
-- sources for this video
hono: hono.dev/
cloudflare workers: workers.cloudf...
-- my links
saas: www.animate-co...
newsletter: www.joshtriedc...
discord: / discord
github: github.com/jos...
I’m working on a project with a client using Cloudflare Workers, Workers for Platforms, Hono and SolidJS, and trust me, things are super fast. We are talking single-digit millisecond responses. So yeah, this works
Where is the difference in speed coming from then, if both are hosted on CF anyway?
honestly no idea, I'd like to know as well
I guess that might be vercel overhead like: logging, analytics, routing, reading config, etc. But I'm not totally sure.
@@joshtriedcoding vercel has a ton of logging and middleware and other built in utilities that it needs to step through is my presumption.
Vercel API routes run as serverless functions, and those aren't fast in comparison to Cloudflare Workers
AWS Lambda vs CF Workers.. V8 isolates vs Node
I really like Hono + Cloudflare Workers. Also D1 works fine too. I sometimes use Turso or D1
Can you tell me how to access env vars in Hono, outside the .get methods?
@@yourlinuxguy you can't access env variables at the top level of your module. If you want to use a library that requires env variables, you should set them as context variables in a middleware if you are using Hono.
Like this:
import { Hono } from "hono";

const app = new Hono();

// register the DB client once per request via a middleware
// (DB stands in for whatever client library you actually use)
app.use(async (c, next) => {
  c.set("myDb", new DB(c.env.MY_ENV));
  await next();
});

app.get("/", (c) => {
  const db = c.get("myDb");
  return c.json({ /* ... */ });
});
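A side note (my addition, not from the comment above): on Cloudflare Workers you can also type both the bindings and the context variables through Hono's generic, so c.env and c.get are type-checked. A minimal sketch, assuming a MY_ENV binding and a hypothetical DB class:

import { Hono } from "hono";

// hypothetical DB client, stands in for whatever library you actually use
class DB {
  constructor(public connectionString: string) {}
}

type Bindings = { MY_ENV: string };
type Variables = { myDb: DB };

// the generic makes c.env.MY_ENV and c.get("myDb") fully typed
const app = new Hono<{ Bindings: Bindings; Variables: Variables }>();

app.use(async (c, next) => {
  c.set("myDb", new DB(c.env.MY_ENV));
  await next();
});

app.get("/", (c) => c.json({ connected: Boolean(c.get("myDb")) }));

export default app;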
@@yourlinuxguy if you find out, let me know
@@deadpeddler you can't access .env outside the .get methods
@@yourlinuxguy you can't
FAST_MODE: true is hilarious.
hahaha cheers man
Great! But the problem with edge or regional runtimes like CF Workers is when the distance between the worker (backend) and its other sources matters.
If the regional backend needs to fetch multiple times from a far-away source (e.g. a database in US-East), it will be worse. If it's one call, the overall time will be about the same.
client ── backend ─────────── database   [backend is near the client]
client ─────────── backend ── database   [backend is near the database]
This shines when the database read replica is regional too, like Turso
Nice insight.
For real-world apps with a database, the latency can be up to twice as bad.
If the database sits in the same location as the Cloudflare worker, fetching from the database will be fast, but that often won't be the case since Cloudflare Workers run globally.
So once the request reaches Cloudflare, another request has to go to the database. And if the request issues multiple DB queries, there will be multiple round trips to the DB, e.g. find the user and then find their orders.
So for requests that need data from the DB, it's better to colocate the server and DB in the same region
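A quick sketch (my addition, not from this comment) of why the number of round trips matters: with the database far away, every awaited query pays the full network latency, while independent queries can at least be fired in parallel. The query helper and the ~120ms figure are assumptions for illustration:

// hypothetical helper that runs one query against a far-away database
declare function query<T>(sql: string, params: unknown[]): Promise<T>;

// sequential: two full round trips (e.g. 2 × ~120ms of pure network latency)
async function getUserThenOrders(userId: string) {
  const user = await query("SELECT * FROM users WHERE id = ?", [userId]);
  const orders = await query("SELECT * FROM orders WHERE user_id = ?", [userId]);
  return { user, orders };
}

// parallel: independent queries share one round-trip window (~120ms total)
async function getUserAndOrders(userId: string) {
  const [user, orders] = await Promise.all([
    query("SELECT * FROM users WHERE id = ?", [userId]),
    query("SELECT * FROM orders WHERE user_id = ?", [userId]),
  ]);
  return { user, orders };
}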
Cloudflare launched D1 and KV, which are respectively a distributed SQLite database and an edge key-value store. And theoretically, when you hit a Cloudflare Worker it should communicate with the closest replica of those DBs.
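For context (my addition, a minimal sketch rather than anything shown in the video): D1 is exposed to the worker as a binding, so a Hono route can query it without a cross-region hop to a central database. This assumes a binding named DB configured in wrangler.toml and an existing users table (D1Database comes from @cloudflare/workers-types):

import { Hono } from "hono";

type Bindings = { DB: D1Database };

const app = new Hono<{ Bindings: Bindings }>();

app.get("/users/:id", async (c) => {
  const id = c.req.param("id");
  // query the D1 binding directly; the replica lives close to the worker
  const user = await c.env.DB
    .prepare("SELECT id, name FROM users WHERE id = ?")
    .bind(id)
    .first();
  return c.json(user ?? { error: "not found" }, user ? 200 : 404);
});

export default app;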
This is an amazing guide, watched it step by step while integrating Hono into my Next.js app. Thanks a lot
Vercel has something called “regional edge” where they deploy your backend to a Cloudflare worker localized to a region like East US or whatever. Normal Cloudflare does not do that. I don’t know if your Vercel deployment was using regional edge, but if it was, that would explain the speed difference. Regional edge allows you to make multiple DB queries without the downside of the distance cost from a distributed edge to a centralized DB. I think you can test it by having the backend do like 3 queries to a database. If Vercel is still slow in that test, then this is an easy Cloudflare W. Cool video king, I love high-effort stuff like this.
interesting, I wonder if the regional edge thing could be the reason for the difference or if its also the vercel tracking, logging etc. Cheers man!
true. it might be the deployment region or the proprietary layers of logging and tracking. Cheers back!@@joshtriedcoding
I am waiting for their latest offering, the ultra pro edge, where the code to be executed will be even closer than the edge, it will be on the users device.😎
standard AWS feature
? no it's not, wdym @@adamconrad5249
Saying it’s double the performance is a bit of a stretch, in a basic example such as yours you’re saving the 10ms overhead that Vercel requires (understandably- Logging, Billing, Auth, etc etc).
The real question is how does it compare in a real world use case, my bet is it will remain in the 10ms range, and definitely not “double”.
Great vid though ❤
yeah, just like you mention, I also wonder if Hono/CF Workers will actually take half as long, or just land within ±10ms of the Vercel Edge network
I haven't tested this on a large scale with more complex API calls, curious what y'all think about this
in my tests (production environment) I got about a ~30% consistent speed difference for db interactions in both API routes. Decided to leave it out cause that part was just boring, probably would've been a good idea to at least include the results. Fair point, appreciate ya
"yeet" as a dummy commit message. I feel so seen right now.
I think you can just add FastMode at build time by conditionally exporting "edge". Adding Hono only helps in treating the backend code like a separate server.
Discovered Hono a while back, so excited that I could not sleep. Tell me, Hono.js vs Elysia.js, which one?
Elysia ❤
both are kinda goated
Do a video building out backend in bun. My curl responses always smoke my command prompt and break the screen 😂. Elysia and Hono both great frameworks but I’m leaning towards Elysia.
Can I not just host my NextJs app on Cloudflare pages? As far as I know, all your serverless functions are deployed as workers this way
That was my first thought because that's where I've been hosting mine.
What about Next.js middleware on CF Workers? Should work too, no? Maybe a test with typical middleware like Clerk or next-intl. If that stuff runs on CF there might be some serious speed gains 🔥
Nice catch! Can you please share the code if possible? I would like to see how it is structured and connected.
Try K6 performance testing for better statistics
3 years into a project and I wish I hadn't used this paradigm. Over time I've been forced to start defining groups of related routes in their own files as the main.go file serving the main web server has grown to over 23 thousand lines of code. It's not nearly as practical to edit or make changes to something so large (the language server latency alone), compared to defining routes in their own files. Plus with Golang there is no performance penalty for spreading the same module/package/app across multiple files because they are all compiled into one binary before the server is executed.
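The same concern is easy to avoid in Hono too, for what it's worth (my addition, a minimal sketch): each group of related routes can live in its own sub-app and get mounted on the main one, so no single file has to grow unbounded. The users group here is just an example:

import { Hono } from "hono";

// in a real project this group would live in its own file, e.g. routes/users.ts
const users = new Hono();
users.get("/", (c) => c.json([{ id: 1, name: "Ada" }]));
users.get("/:id", (c) => c.json({ id: c.req.param("id") }));

const app = new Hono();
// mount the group under a prefix; more groups attach the same way
app.route("/users", users);

export default app;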
Do you think cloudflare/next-on-pages gives a similar configuration and speed result? might be another comparison to do?
How do you deploy the frontend separately too? I just got to know about this 😅, searched it up but couldn't find it. Could you pls tell?
if it's static you can port it to pretty much anywhere; even if not, most places support Next builds out of the box, like Railway, Amplify or Netlify
So the only difference is switching the backend host? Wow, that's really a valuable insight
The channel is called Josh tried coding, not that he succeeded.
why am I getting Error: You cannot define a route with the same specificity as a optional catch-all route ("/" and "/[[...route]]")? I followed ur video instructions :(
might be a problem with the pages vs app router, if you have both at the same time (hono initialized with pages by default, I switched to app manually), this can happen
@@joshtriedcoding I use the app router only and have never used the pages router
Per the hono docs, move /app/[[...route]] to /app/api/[[...route]]. That fixed it for me
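For reference (my addition, a minimal sketch of what that catch-all file tends to look like with Hono's Vercel adapter; the basePath and the /hello route are assumptions):

// app/api/[[...route]]/route.ts
import { Hono } from "hono";
import { handle } from "hono/vercel";

export const runtime = "edge";

const app = new Hono().basePath("/api");

app.get("/hello", (c) => c.json({ message: "hello from hono" }));

// expose the Hono app through Next.js route handlers
export const GET = handle(app);
export const POST = handle(app);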
It certainly seems like a nicer way than using route.ts files; even on small projects those don't seem appropriate, let alone on something decent-sized.
your channel is a dream come true. Where do you even find these lol
I don't understand why I see so much hono, while nobody mentions h3. H3 works everywhere with everything, while having 10 times the usage.
My guess would be the more familiar API (feels like Express), plus they add some interesting features
h3 and nitro are both great to work with
Immediately noticed "eventHandler" bloat. Otherwise looks identical.
Looks like express
Spiceflow is also good
I tested Hono vs server actions, with Prisma and Next.js 14.2, and it's just the same. Note: don't use runtime='edge' everywhere, it's gonna cause some bugs. Who's tried Drizzle?
What about deploying it to normal vps? Cheap, performant and without cold starts ever
How do you fix the build error that shows up when the APIs being fetched are from the Next.js API instead of an external API?
I think with this approach you are calling the nextjs route from a server component using fetch which gives some overhead. But I guess the App Router was invented to avoid these unnecessary fetches. So maybe a client component would be a better comparison
Can it replace tRPC, or does it not provide the same e2e type-safe story? How about validation/error handling? Thanks for pointing out the Vercel overhead.
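Not an answer from the thread, but for context: Hono ships an end-to-end typed RPC client (hc) plus a zod validator middleware, which covers a lot of the tRPC story. A minimal sketch, assuming @hono/zod-validator and zod are installed:

import { Hono } from "hono";
import { hc } from "hono/client";
import { zValidator } from "@hono/zod-validator";
import { z } from "zod";

// server: validate the JSON body, return a typed response
const app = new Hono();
const route = app.post(
  "/greet",
  zValidator("json", z.object({ name: z.string() })),
  (c) => {
    const { name } = c.req.valid("json");
    return c.json({ greeting: `hello ${name}` });
  }
);
export type AppType = typeof route;

// client: fully typed request/response, no codegen step
const client = hc<AppType>("http://localhost:8787");
const res = await client.greet.$post({ json: { name: "Josh" } });
// (await res.json()).greeting is inferred as string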
Great timing Josh! Thank you for this useful video.
Please, I have a couple of rookie questions:
- Can CF Workers entirely replace the backend part of a web app? Because in the pricing panel I see "Up to 10ms CPU time per request" on the free tier, so it makes me wonder if it's only for very light requests/payloads or if it can handle PDF creation or batch database transactions..
- I'm just getting familiar with using Next.js for the API/backend part. Can I reuse the endpoints generated by Vercel or CF Workers to write, let's say, a native mobile app? Does it generate a URL that I can hit with auth headers or so?
Thank you so much in advance!
hi! Technically you can reuse the APIs, practically you'd normally use a separate backend. At my last job we used fastify as the backend that our iOS, android and web apps could all access and it worked great
@@joshtriedcoding Thank you for the insight 🙌
10ms CPU time is not low. It's more than enough. But I don't think you can run PDF generation on CF Workers.
@@twitchizle thank you mate, definitely will give it a try and compare between workers and AWS lambdas
Wtf is export default as never??
From my understanding, it’s telling typescript “I need to export this file, but I should never use/import it anywhere else”
what was that I saw, a new TCP connection being established to the database on every request? it's PHP all over again lmao
bro can you elaborate more on this !!!!....
@@farjulmallik4135 He is calling the function that establishes the connection inside the route handler which means a connection is being established on every request made to the route handler, this was mainly due to not being able to access the environment variable and not a runtime limitation, but this emulates what PHP does where it is completely stateless and the concept of database connection pooling or long lived connections do not exist because on every request php restarts its process. Most backends nowadays have connection pools to alleviate the issue of connection reestablishment. Not having pools or not having a long lived connection may not be the main concern of users of such a system, but there is a stark difference in terms of database IO performance between backends using a connection pool/long lived connection and those which do not.
yep, so now it's JS turn to solve those issues again :)
Pgbouncer
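A minimal sketch of the usual fix (my addition, not from the thread): lazily create the client once at module scope and reuse it across requests handled by the same process. This applies cleanly to a long-lived Node server; on Workers you would typically put a pooler like PgBouncer (or an HTTP-based driver) in front instead. DbClient is a placeholder:

// hypothetical client, stands in for pg.Pool, mysql2, etc.
declare class DbClient {
  constructor(connectionString: string);
  query(sql: string): Promise<unknown>;
}

let client: DbClient | undefined;

function getDb(connectionString: string): DbClient {
  // connect once, then reuse the same connection for every request
  if (!client) {
    client = new DbClient(connectionString);
  }
  return client;
}

export async function handler(connectionString: string) {
  const db = getDb(connectionString);
  return db.query("SELECT 1");
}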
how did you manage to avoid ; (semicolons) at the end of your code?
It's called automatic semicolon insertion. Very few people learn about it, so it ends up being confusing. It is nothing more than a toxic flex, don't use it.
Because what for?
I've watched this video multiple times. Never gets old!
This was released yesterday bruh
@@tvnishq lol
Is it possible with trpc?
are u not censoring env vars? At minute 4:27 I can see the user and pass of ur database
i just rotate passwords or delete databases before uploading
Another awesome video 🔥
Does this work with preview deployments?
what does "run at the edge" mean? I'm not a native English speaker, so I didn't get it.
Hosted at a CDN edge location. Closer to the user, so lower latency.
Wondering about hono vs elysia aswell
I prefer Elysia, but Elysia has some "Bun only" stuff, that doesn't work on Workers.
Is this associated to the server? or to hono / next?
Does this work with nextAuth v4?
How big of a script, size wise, can you host on CF workers anyway?
up to 1mb for free
nextjs backend? emm.. dotnet backend, yes!!!
What is wrong with using Next.js as a backend framework? Isn't it nice that you can stay in the same language and framework and have it deployed the same way?
@@JakobTheCoder i think op is joking
@@JakobTheCoder It's fine if you are a frontend developer and only use it to connect to a single frontend.
If you want to do more with your backend, Next.js is not a good choice.
It is a backend for a frontend, not a backend for creating a generic API.
Also if you want to build more workers / microservices, or want more freedom and no vendor lock-in, Next.js is not what you want.
So Next.js is great for a frontend developer, and only for a single frontend with a single backend.
@@JakobTheCoder yes, great for small-to-medium size applications, but JavaScript's performance is not enough for large/perf-centric applications.
This is where C#, Java, Rust, and other languages shine!!
@@sanampakuwal There are good points to make for and against using JS / C# etc. on the backend, this is a great discussion
hi josh, I wanted to know how we can use Prisma on a Cloudflare Worker.
Use Drizzle instead
Nice, I am using it and it's amazing, fast and so simple :)
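Since Drizzle keeps coming up (my addition, a minimal sketch): it has a D1 adapter, so on Workers the setup is roughly this. The DB binding and the users table are assumptions:

import { Hono } from "hono";
import { drizzle } from "drizzle-orm/d1";
import { sqliteTable, integer, text } from "drizzle-orm/sqlite-core";

// assumed table definition matching an existing D1 table
const users = sqliteTable("users", {
  id: integer("id").primaryKey(),
  name: text("name"),
});

type Bindings = { DB: D1Database };

const app = new Hono<{ Bindings: Bindings }>();

app.get("/users", async (c) => {
  // build the Drizzle client from the D1 binding on each request
  const db = drizzle(c.env.DB);
  const rows = await db.select().from(users);
  return c.json(rows);
});

export default app;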
Even a Node.js backend is too vast and heavy for a lot of software services
Using edge isn't fundamentally a way of writing faster APIs....
or, just saying, pick another language
Cloudflare Workers are fast because they run closer to the user's location. It's not easy to build edge functions in a language other than JS.
If you are running the fastest language but still serving from one centralized location, that will be slower because of network latency
can you make it work with tRPC?
yes, there is trpc middleware for Hono
You can make anything work with tRPC
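For anyone curious, there is a @hono/trpc-server middleware; a minimal sketch of mounting a tRPC router on a Hono app (the router and procedure names are assumptions):

import { Hono } from "hono";
import { trpcServer } from "@hono/trpc-server";
import { initTRPC } from "@trpc/server";
import { z } from "zod";

// a tiny tRPC router with one procedure
const t = initTRPC.create();
const appRouter = t.router({
  hello: t.procedure
    .input(z.object({ name: z.string() }))
    .query(({ input }) => ({ greeting: `hello ${input.name}` })),
});
export type AppRouter = typeof appRouter;

const app = new Hono();
// mount the tRPC router under /trpc
app.use("/trpc/*", trpcServer({ router: appRouter }));

export default app;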
It seems Vercel just gave up on Edge, this won't be an option anymore. At least not with Vercel.
youtube.com/watch?v=lAGE-k1Zfrg
The fastest api is a "no code" api.
Josh, my brother, I really need your help with my backend architecture. It shouldn't take that much of your time, but if you could help I will be really grateful to you
what is the name of the font?
🎉🎉🎉
Banger
super cool!
I'll still prolly use express or rust for backend, but knowing this is pretty good for small scale applications that I don't want to spend much time on 👀
"how to make your api faster" use a faster service!
- duh
Stop deploying js backends 😂
Prisma does not work
That's because you don't use Drizzle
Nah thanks
what is this? 😅
Please heart ❤
for sure man
hey. You have 120K subscribers, you need to up your game.
(we all know this video was not up to the mark.)
btw, vercel also supports edge functions.
Last
15ms vs 30ms nobody cares
First
stop making Next.js into a freaking backend -_-
What ur doing is not programming. Children playing at being coders 🤷♂️