I have used BFF in every Nuxt project that I've worked on. I love how it gives us frontend devs control over the backend. It also gives us the freedom to use whatever backend we want.
Also, BFF is useful to proxy your API in case it should not be accessed directly because of keys and secrets stored in ENV variables.
But yeah, I've been using BFF since my first project, never knew that's what it's called.
Yes, absolutely!
Do you use the BFF pattern? Share your cases here! 🙌
I use BFF to get data for a user that is authenticated by a cookie containing a token which can only be read on the server.
Because of that, I want to know if I can make the request directly in an Island so I don't have to create a new API file just for that.
My request URL has slugs, and in the Island I can't get them; useRoute().params is empty there. Any tips?
O.K. I've been doing this in every app and had no idea that this pattern is called BFF. Another skill to add to my resume 🤣
I can't believe these videos are available for free! Unfortunately, in a lot of Nuxt tutorials, you see people basically just going through the docs, and I don't learn anything new. But you come up with a broad variety of topics!
In our case we control how much data is returned to the FE via a transformer, based on the payload.
E.g. if the FE wants certain fields, it can pass a fields payload, or use a conditional approach (e.g. pages: employee, profile-page) which is then used as the condition for the transformer in the BE. That way the FE has a ton of options without relying heavily on the BE team.
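A minimal sketch of that idea in a Nitro route (the upstream URL and the "fields" query parameter are hypothetical, not code from the video):

```ts
// server/api/employees.get.ts – minimal sketch; upstream URL and "fields" param are hypothetical
export default defineEventHandler(async (event) => {
  const { fields } = getQuery(event)

  // Fetch the full payload from the (hypothetical) upstream API
  const employees = await $fetch<Record<string, unknown>[]>(
    'https://api.example.com/employees'
  )

  // No fields requested: return everything
  if (typeof fields !== 'string' || fields.length === 0) {
    return employees
  }

  // Only keep the fields the frontend asked for, e.g. ?fields=id,name
  const wanted = fields.split(',')
  return employees.map((employee) =>
    Object.fromEntries(wanted.map((key) => [key, employee[key]]))
  )
})
```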
Just put a GraphQL API gateway in the middle; it's made exactly for this use case and is the most mature way to do a BFF pattern, with fragments etc.
Totally agree, I was going to write the same comment
I've been doing that with Mongo's aggregation functionality without even knowing that's BFF. Cute.
I literally had this exact issue on Friday. I have access to an endpoint that has all the data I need but I want to separate it out to only fetch specific data per page. Now I can do all of that without bugging my busy back-end developer. Nuxt never stops surprising me with how good it is!
Thanks for this valuable content. This channel is gold for Nuxt developers.
Glad you think the content is valuable 🙏
That's the goal ✨🙌
Great stuff, like usual 👍
Thank you 🙏🏻
Awesome, thanks again! Have you made any video about protecting API routes in Nuxt?
Not yet, but on the list! Thanks for the suggestion 👌
Laravel Mentioned.
Checks out ✅
What tool are you using there for the diagrams? Looks neat
Great video Alexander! Is there a way to check your vscode setup, extensions and settings?
Will put up a /uses soon!
Interesting, thanks!!
You are welcome 😊
thanks ❤
You're welcome 😊
Thanks for the amazing video! I have two questions:
- If my API endpoint returns all the information possible because it's a general endpoint designed to be used by multiple clients, then even with a BFF the page won't necessarily get faster: the endpoint will be slower because it has to fetch more data from the database, transform it, and transfer it over the network. So how do we improve speed via BFF?
- If the data changes quite often, caching won't be possible, so we can't benefit from it. Maybe a different caching strategy must be applied?
Thank you!
You are welcome!
1) It can still be faster if you actually cache the data (if possible). Otherwise, there might be a slight overhead because you do another request, yes. You can then decide to simply proxy your API endpoint (ua-cam.com/video/J4E5uYz5AY8/v-deo.html)
2) You might be able to partially cache the data then, e.g. the part that doesn't change, or use a shorter cache duration. If the data isn't cacheable at all (e.g. it changes too fast), then you might not be able to cache it if you need it in "real time".
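For the partial-caching idea, a minimal sketch using Nitro's cached event handlers (hypothetical endpoint, assuming slightly stale data is acceptable):

```ts
// server/api/products.get.ts – minimal sketch; upstream URL is hypothetical
export default defineCachedEventHandler(
  async () => {
    // Fetch from the upstream API once per cache window
    return await $fetch('https://api.example.com/products')
  },
  {
    // Serve the cached response for up to 60 seconds...
    maxAge: 60,
    // ...and refresh it in the background (stale-while-revalidate)
    swr: true,
  }
)
```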
@@TheAlexLichter Thank you very much. The partial cache is a good idea. I didn't think about that. And thank you very much again for the tutorial!
@handmasters sure thing! Thanks for your question 🙏🏻
First of all, thanks for the content. Considering the best-practice URL proxy method, which uses a catch-all route in the server/api/ path, how do these two play together? Also, since a catch-all server route can transform all outgoing requests, is it viable to move general logic like authentication headers into that same route?
Immediately thinking about GraphQL 😹
You used useStorage, and it is a composable. I've learned from another video of yours that a function that uses a composable is a composable itself, and should have a prefix 'use' and should be put in the composables/ directory instead of utils/.
So my question is: is this an exception to the rule (and if so, which one), and why? Or is this just for convenience?
Thanks for the comment! It is important to note that useStorage is not a "Vue composable". It can *only* be used in the server/ folder (so in the Nitro context).
There, the rules for Vue composables don't apply - everything will run in the "Nitro context"/"server context", so we don't necessarily need a distinction between utils and composables like we do on the client side.
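A minimal sketch of useStorage inside a Nitro route, to illustrate that it lives in the server context (the mount point, key and upstream URL are hypothetical):

```ts
// server/api/stats.get.ts – minimal sketch of Nitro's useStorage (not a Vue composable)
export default defineEventHandler(async () => {
  const storage = useStorage('cache') // hypothetical mount point

  // Return the stored value if we already have one
  const cached = await storage.getItem('stats')
  if (cached) {
    return cached
  }

  // Otherwise fetch from the (hypothetical) upstream API and store the result
  const stats = await $fetch('https://api.example.com/stats')
  await storage.setItem('stats', stats)
  return stats
})
```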
JUST USE JS FOR EVERYTHING BUTTON
*TypeScript enters the room* 👀
Hey Alexander, I have a couple of Nuxt projects and I want to build a BFF in one Nuxt app and use it in the other Nuxt apps, but there is a problem... I lose the type safety in the other apps and have to create the types manually. Is there some way I can import the types into the other Nuxt projects? Or is there a plan to implement something like codegen/OpenAPI for the types?
Having the types in useFetch in the other projects would be ideal!
Any help with this is much appreciated! Thanks!
If it is a monorepo, it should be straightforward.
If there are different repos, you can use Nitro's OpenAPI option to generate the types 👌🏻
@@TheAlexLichter where can I find the docs for Nitro's OpenAPI option?
@ivanangelkoski nitro.unjs.io/config#openapi
@@TheAlexLichter Thank You Sir!
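A minimal sketch of enabling that option (Nitro's OpenAPI support is experimental, so the exact flag may change; the generated spec can then be fed into a codegen tool such as openapi-typescript from the other repos):

```ts
// nuxt.config.ts – minimal sketch; Nitro's OpenAPI support is experimental
export default defineNuxtConfig({
  nitro: {
    experimental: {
      // Generates an OpenAPI spec for the server/api routes,
      // which other projects can use to generate types (e.g. via openapi-typescript)
      openAPI: true,
    },
  },
})
```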
I can't find an example of how to add query or route params to the description for the OpenAPI spec. Can you provide a simple snippet showing how to document, for example, pagination queries and params? For example, if I'm on the route api/posts?limit=10&skip=10, how can I make skip and limit show up in the OpenAPI spec? And if I'm on the route api/post/[id], how can I add the id param?
We've been using BFF for quite some time now (via Lambdas) but recently we've started migrating them into our Nuxt project using /api/. WE LOVE IT.
🔥🔥🔥
I'll look, but my first question is: can the utils folder have nested folders to organize my APIs and still benefit from auto-imports in the server endpoints?
You'd have to add them to your auto-imports then, but yes, that works.
See ua-cam.com/video/FT2LQJ2NvVI/v-deo.html
Hey Alexander, how do I combine this with proxyRequest as explained in the other video? I can't seem to intercept the returned data.
I'd just fetch the data with $fetch then 👌
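A minimal sketch of that suggestion (hypothetical upstream URL and response shape): instead of proxying, fetch the data yourself so you can transform it before returning it.

```ts
// server/api/posts/[id].get.ts – minimal sketch; upstream API is hypothetical
export default defineEventHandler(async (event) => {
  const id = getRouterParam(event, 'id')

  // Fetch instead of proxying, so the response can be intercepted
  const post = await $fetch<{ title: string; body: string; internalNotes?: string }>(
    `https://api.example.com/posts/${id}`
  )

  // Transform / strip data before it reaches the client
  return {
    title: post.title,
    body: post.body,
  }
})
```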
Very informative. Lovely demonstration, thanks Alex for this video. Do you know how this would work if the backend server returns an SSE stream? How do I stream that to the frontend from Nitro?
At 22:26 why did you turn this into an arrow function?
To make sure reactivity is not lost. I could've also used a computed property instead.
More info on the topic - ua-cam.com/video/sccsXulqMX8/v-deo.html
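A minimal illustration of the difference (hypothetical route, not the exact code from the video):

```ts
// Hypothetical example inside a component's setup
const route = useRoute()

// Evaluated once: the URL would not update when the route param changes
// const { data } = await useFetch(`/api/posts/${route.params.slug}`)

// Getter (or computed): the URL stays reactive and the data refetches on change
const { data } = await useFetch(() => `/api/posts/${route.params.slug}`)
```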
Nice one Alex. Curious to understand how Serverless Functions fit into this pattern. Do BFF and Serverless Functions take fundamentally different approaches to backend architecture, even though they can complement each other?
Serverless functions are just an alternative way of compute and can be used with most patterns. They can be the backend itself, a dedicated microservice or a complete BFF-style API gateway.
It's more about data persistence, latency and the other limitations of serverless compute that make you decide whether to use them or not.
@@gro967 Thanks for sharing your thoughts 👍🏽 I'm going to explore serverless fns shortly, so I was curious!
GraphQL?
In Next.js, this is called a Route Handler.
Does it have any performance pitfalls for the app?
It has a little overhead *but* this is commonly outperformed by the benefits such as caching.
Haha nice we do this as well :)
Sweet! 👌
How do you get this interface in the browser when you enter an API URL?
Firefox does that by default when it recognizes JSON 🎉
@@TheAlexLichter Thanks a lot for your videos as always.
Of course! You are welcome 😊
Sir, why are you using pnpm? Can you explain what role pnpm plays in Nuxt?
pnpm is the de facto default package manager for modern JS projects. If you are very legacy, you use npm; if you are somewhat legacy but want monorepo features and better performance, you use yarn; if you want a modern package manager that is performant and saves disk and memory space, you use pnpm; and if you are a little bit hipster, you go with bun.
But don't confuse it with the runtime; bun, for example, is multi-purpose.
@@gro967 Can you please suggest any websites or channels for a deeper understanding of these package managers?
Or we can just return less data from our backend ✅
Yes, if...
* ...you have control over your backend (think of a third-party API, where you don't)
* ...you are fine with extra communication loops between BE and FE
* ...you want specialized BE routes for one application
waiting...
I have a question. In the Nitro documentation, the minify option is set to false. What is it set to in Nuxt?
If it's false, then to get minified files I need to enable it in nuxt.config.ts. Correct?
I think I should duplicate the question in English as well, just in case...
Correct
In your nuxt config there is a nitro field that is responsible for its configuration. You can set it up there.
@@Максим-в3ф6о Already figured it out. You need to set it explicitly.
Minify is true in production for e.g. CF workers! Great to disable for debugging but otherwise a sane default 👍🏻
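A minimal nuxt.config.ts sketch for setting the option explicitly (e.g. disabling minification while debugging server output):

```ts
// nuxt.config.ts – minimal sketch; Nitro's minify option
export default defineNuxtConfig({
  nitro: {
    // Set to false temporarily to get readable server output for debugging
    minify: false,
  },
})
```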