this is a brilliant video and a very interesting paas, open source, all the things
end of a long week for me and just found your channel, like finding a gem
cheers !
Thank you so much! Have a beautiful weekend, my friend!
Hit me up: What's your favorite way to deploy?
Before I saw this: a VPS or GitHub Pages.
Wow dude, thanks! ♥
Glad you liked it.
Please man... thanks so much for this. I have a Node backend. How do I deploy it to CapRover without running into this 502 NGINX error?
I've been stuck on this part for the past 2 days.
Can you give me more information? What did you do? When does the error appear?
And does CapRover work with an SSR adapter, like Node with Astro?
Yeah sure, that should work :)
That means I can develop an app like Astro.js with Supabase as the backend, deploy it to the Internet with CapRover, and get SSL, a custom domain, etc.?
Yes!
Can you make a guide around hosting Supabase on CapRover? I couldn't manage to do it. CapRover uses its own definition file, which is not the same as the docker-compose file Supabase uses.
@@MrDevianceh Yeah, true. The captain-definition file unfortunately doesn't support multi-container setups, although at the end of the day CapRover itself does use multiple containers for its recipes. I've written your suggestion down.
Is there any possibility to deploy it from GitHub directly? Thanks
There are two. The easiest one is using GitHub webhooks; I mentioned this in the video. You enter your email and a deploy/access token as the password, and then CapRover gives you a link which you can add to GitHub as a webhook.
The alternative would be using the CapRover CLI inside your GitHub CI YAML file (I haven't tried that, though, and I think the first solution with the webhook is easy and does the job).
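The CLI route could look roughly like the following CI step. This is a sketch only: the CapRover URL, app name, and token variable are placeholders, and in GitHub Actions the token would come from a repository secret.

```shell
# Hypothetical CI deploy step using the CapRover CLI.
# URL, app name, and token are placeholders; in GitHub Actions the token
# would come from a repository secret.
CAPROVER_URL="https://captain.example.com"
APP_NAME="my-node-backend"
APP_TOKEN="${CAPROVER_APP_TOKEN:-dummy-token}"

# Build the deploy command; the real call needs a reachable CapRover
# instance, so it is shown here without being executed:
DEPLOY_CMD="npx caprover deploy --caproverUrl $CAPROVER_URL --appToken $APP_TOKEN --appName $APP_NAME"
echo "$DEPLOY_CMD"
```

The webhook approach needs none of this, which is why it's the easier option.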
Docker inside Docker? Who uses macOS for a server?
It's for local/dev purposes. For anything else one would obviously choose Linux and skip the DinD approach :)
I am writing here because others may be interested.
I have a problem with Astro. I am using Astro's new build caching. During the build process, the data is cached in a folder, and I would like to reuse it in the next build. So I need a persistent app, but I don't understand how I can access a persistent folder, or whether it works at all.
My Config:
Path in App: /astro-cache
Label: astro-cache
In my Astro config I enter the following as the caching path:
cacheDir: './astro-cache'
That won't be correct, but I don't know how else to access the path. Is it even possible?
If it is possible, I would like to specify the persistent directory as the cacheDir in my Astro config.
As discussed previously, it's hard to access a VOLUME from within a running build process. The best bet would be copying the files into the build context directory, which comes with several problems:
1. You'd need to know where CapRover does the `docker build`
2. You'd need to copy the existing cache directory into that before the build starts and then do `COPY /cache-dir ...` within that
3. Where do you get that `/cache-dir` from in the first place?
The third question can be answered, e.g., by having a git pipeline that first triggers a script which runs `npm run build` and puts its result into the cache-dir, from where it can later be read (2). But then again, you still face problem (1): you need to hook into the build.
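Step (2) above can be sketched like this. Both paths are hypothetical placeholders, and problem (1) remains: CapRover doesn't expose where its build context actually lives.

```shell
# Sketch of step (2): seed the Docker build context with a previously built
# cache. CACHE_DIR and BUILD_CONTEXT are hypothetical paths; CapRover does
# not expose its real build directory, which is exactly problem (1).
CACHE_DIR="/tmp/demo-cache-dir"
BUILD_CONTEXT="/tmp/demo-build-context"

mkdir -p "$CACHE_DIR" "$BUILD_CONTEXT"
echo "cached artifact" > "$CACHE_DIR/page.html"

# Copy the cache into the context before `docker build` runs, so a
# Dockerfile line like `COPY cache-dir /astro-cache` can pick it up:
cp -r "$CACHE_DIR" "$BUILD_CONTEXT/cache-dir"
ls "$BUILD_CONTEXT/cache-dir"
```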
That's why you're probably better off building inside the container instead of in the Dockerfile, and using a HEALTHCHECK command in the Dockerfile to determine the container's healthiness depending on whether a certain URL can be accessed or not (a curl request).
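A HEALTHCHECK for that could look like the fragment below. The port, path, and timings are assumptions; `--start-period` gives the in-container build time to finish before failed checks count.

```dockerfile
# Sketch: container is "healthy" once the app (building itself at startup)
# serves its URL. Port, path, and timings are assumptions.
HEALTHCHECK --interval=30s --timeout=5s --start-period=120s --retries=3 \
  CMD curl -f http://localhost:3000/ || exit 1
```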
Another idea: use the repository pipeline / GitHub Actions to build on GitHub, re-using cache directories from the build, and push the result to a specific branch when successful (like a `deployment` branch). That branch can then be used by CapRover.
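The "deployment branch" idea could be sketched as follows. The branch name and output directory are placeholders, and to stay self-contained this demo runs against a throwaway local repo instead of pushing anywhere.

```shell
# Sketch: after a successful build, commit the build output to a branch
# that a CapRover webhook can watch. Branch name and output directory are
# placeholders; demoed against a throwaway local repo.
REPO="$(mktemp -d)"
cd "$REPO"
git init -q
git config user.email "ci@example.com"
git config user.name "CI"
echo "source" > app.js
git add app.js && git commit -qm "initial commit"

BUILD_DIR="dist"
DEPLOY_BRANCH="deployment"
mkdir -p "$BUILD_DIR"
echo "<html>built</html>" > "$BUILD_DIR/index.html"

git checkout -q -B "$DEPLOY_BRANCH"
git add -f "$BUILD_DIR"            # -f in case dist/ is gitignored
git commit -qm "ci: deploy build output"
# In real CI you would now push: git push origin "$DEPLOY_BRANCH" --force
git log --oneline
```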
But none of these things are natively supported, so if you don't want workarounds you might want to consider Portainer instead, which is a bit more versatile and, I think, has webhooks as well.
@@ChristianKolbow docs.github.com/en/actions/using-workflows/caching-dependencies-to-speed-up-workflows — also check this one to grasp ideas for pre-building the cache.