How to build a three-tier serverless Cloud Run app

  • Published Sep 8, 2024

COMMENTS • 36

  • @googlecloudtech
    @googlecloudtech  2 years ago +2

    Are you going to build a three-tier serverless Cloud Run app? Don’t forget to subscribe to Google Cloud Tech → goo.gle/GoogleCloudTech

  • @nw_rye
    @nw_rye 1 year ago +4

    Great video! Next show a three tier app on cloud run for internal/enterprise users who don’t want their frontend publicly available, for example with IAP but in a way that works well with both tiers.

    • @TheMomander
      @TheMomander 1 year ago +2

      Excellent suggestion; thank you! I'm adding it to the list of future episodes.

    • @TheMomander
      @TheMomander 1 year ago +1

      By the way, the Cloud Run + IAP integration has launched. See the video titled "Cloud Run user auth for internal apps" that was released recently.

  • @muncho404
    @muncho404 2 years ago +1

    Google keeps us Motivated🍉

  • @sumitbhowmick1094
    @sumitbhowmick1094 2 years ago +2

    Wow! Great video. Almost all the components we use in day-to-day cloud applications were covered. Hope to see many more similar videos.

  • @piotrzakrzewski2913
    @piotrzakrzewski2913 2 years ago +3

    What is the benefit of separating the front-end and the API, actually? There is certainly a cost: how do you roll out breaking changes to the API? Easier with just one Cloud Run deployment!

    • @jsalsman
      @jsalsman 2 years ago +2

      Really, why bother with a separate Run container just for static content? Use a storage bucket for lower latency and CDN features. These ultra-short tutorial videos sometimes get their examples simplified below the point where their practical motivations can be discerned, but not often.

    • @TheMomander
      @TheMomander 2 years ago +3

      Good question! One reason to split up an application into front-end and API is that the former tends to change more often. Any time there is a tweak to the user interface, you can then deploy that without touching the API. In Cloud Run you pay for CPU and memory time while a request is being handled. There is no fixed fee per deployed container.
      Your point about how to deploy breaking changes is really interesting. It's complicated. In our experience the API back-end has to be able to handle requests from both old and new clients, no matter how you deploy. Some users will have opened the webpage before you deploy a new version of the back-end and will keep using it after your deployment. One way to deal with this is to include a version number in the REST API path for breaking changes, like Twitter does: "/1/to-do", "/2/to-do", etc.

    • @TheMomander
      @TheMomander 2 years ago +2

      @@jsalsman Good point! Another option is to deploy the static HTML/JS/CSS files to Firebase Hosting, which comes with a built-in CDN.
      But some organizations we've worked with prefer to have everything in containers. That way if there is a problem with the front-end, an Ops team member can switch traffic back to the previous version of the container in the registry with a few clicks. Same thing if there is a problem with the back-end. They find it easier to shift traffic like this than to rebuild the code and redeploy it. This is especially true if the front-end and back-end are hosted on different Google Cloud products so the rollback procedures are different.
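      For the Firebase Hosting option, a minimal `firebase.json` sketch (the public directory, service name, and region here are assumptions) that serves static files from the built-in CDN while forwarding API paths to a Cloud Run service:

```json
{
  "hosting": {
    "public": "dist",
    "rewrites": [
      { "source": "/api/**", "run": { "serviceId": "middleware", "region": "us-central1" } }
    ]
  }
}
```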

    • @jsalsman
      @jsalsman 2 years ago +1

      @@TheMomander I don't know, something doesn't seem right about the example as presented in such abstract form. Can you tell whether a more common pattern is for the database server (or some process on it, whether the actual database or a support script) to spawn Run containers from its internal IP address, rather than the configuration shown in this video? I'm guessing it may be, as I have near-zero familiarity with golang network service patterns.

    • @jsalsman
      @jsalsman 2 years ago +1

      @@TheMomander P.S. The example in this video would work for a proxy revamp of a legacy service. That would make the example more accessible.

  • @joneskiller8
    @joneskiller8 2 years ago +2

    Need complete deployment demonstration

    • @TheMomander
      @TheMomander 2 years ago

      Good point! You can deploy it yourself if you go to the "Deploy Stack Todo" link in the video description.

  • @Raza_9798
    @Raza_9798 2 years ago +2

    Happy to comment as 1st on Google

  • @SandeepKongathi
    @SandeepKongathi 2 years ago +2

    This is an interesting take to serve static content from Cloud Run; I thought Firebase was an easy approach.

    • @TheMomander
      @TheMomander 2 years ago +2

      Agreed, Firebase Hosting is great for hosting static content. I use it all the time. But some organizations prefer to have everything in containers, for easy versioning, audit, and rollback.

    • @SandeepKongathi
      @SandeepKongathi 2 years ago +1

      @@TheMomander Firebase has a managed CDN; I guess here we need to configure the CDN and firewall explicitly.

    • @TheMomander
      @TheMomander 2 years ago +1

      @@SandeepKongathi Correct, Firebase Hosting includes a CDN. That's one of the reasons I use it for applications with a global audience. But if your application is used mostly in one country, a CDN isn't really needed. Just deploy your Cloud Run service in a region close to your users.
      Firewalls aren't necessary for serverless platforms like Cloud Run. Google only accepts HTTPS connections on behalf of your container. Google then translates everything to a single port that your container can listen to. No-one can hit your container on a non-standard port. Life is easy when you go serverless 🙂

  • @anandhukraju9382
    @anandhukraju9382 2 years ago +2

    Would like to know how to spin up cloud run in a vpc in this video and then only allow http requests from the public frontend cloud run service

    • @TheMomander
      @TheMomander 1 year ago +2

      Good question! First be aware that approach won't work in this particular application. The Javascript in the web client actually makes API calls to the Cloud Run service called "Middleware" in the diagram.
      But let's say you have a front-end service that accepts calls from Javascript web clients, and that then makes calls to a back-end Cloud Run service. Here is how to make sure the back-end Cloud Run service is not reachable from the public Internet.
      First set the Ingress for the back-end service to be "Internal only". Then route the calls from the front-end service through a Serverless VPC Connector. That makes those calls internal so they can still reach the back-end service. For more details, search for the video "Cloud Run: Concepts of Networking". Best of luck with your project!
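      Roughly, with gcloud (the service and connector names here are assumptions):

```shell
# 1) Restrict the back-end service to internal traffic only.
gcloud run deploy backend --image=IMAGE_URL --ingress=internal

# 2) Route the front-end's outbound traffic through a Serverless VPC
#    connector so its calls to the back-end count as internal.
gcloud run deploy frontend --image=IMAGE_URL \
  --vpc-connector=my-connector --vpc-egress=all-traffic
```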

    • @anandhukraju9382
      @anandhukraju9382 1 year ago +2

      @@TheMomander Thank you Martin for the reply and all the videos :)

  • @onemanops
    @onemanops 2 years ago +1

    Thank you for this info. I can really use Cloud Run, and I will use Secret Manager to store my API keys. Would it be too much to ask for a video on how to use Firebase/Firestore for authentication and authorization with this setup? I have Firestore set up for auth, but it only allows users in my domain to join, and I want the public to join. So far the user auth part is the only thing stopping me from deploying my website. Thank you!

    • @TheMomander
      @TheMomander 2 years ago

      Good idea -- we will record a video about that!
      In the meantime, are you using regular Firebase Authentication? I haven't seen a case like yours before. There is a video called "Getting started with Firebase Authentication on the web - Firebase Fundamentals" which goes through the setup, step by step. Did you have a chance to watch that video?

  • @dheer211
    @dheer211 2 years ago +1

    Is the API service set up as a private Cloud Run service, or is it publicly accessible? Ideally it should be protected from public access.

    • @TheMomander
      @TheMomander 2 years ago

      Good question; we didn't talk about that in the video. You get to choose. In the video we set up a service that can accept traffic from the Internet, but we could also make it accessible to internal traffic only, or to traffic from a load balancer only. Independently of that, you can choose whether all incoming traffic is accepted or whether it has to be authenticated with Cloud IAM.
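      For example, with gcloud (the service name is an assumption):

```shell
# Ingress controls where traffic may come from:
#   all | internal | internal-and-cloud-load-balancing
gcloud run deploy my-api --image=IMAGE_URL --ingress=internal

# Independently, require Cloud IAM authentication on every request:
gcloud run deploy my-api --image=IMAGE_URL --no-allow-unauthenticated
```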

  • @vijay80
    @vijay80 2 years ago +1

    I have a very basic doubt. I want to use nginx along with Cloud Storage, but we cannot really access files in Cloud Storage the way we would from a file system (for example, mounting a volume and accessing files just like a local file system). Can nginx serve files from Cloud Storage?

    • @TheMomander
      @TheMomander 2 years ago +1

      You could try mounting the Cloud Storage bucket as a drive using FUSE.
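      A rough sketch with gcsfuse (the bucket and mount point names are assumptions):

```shell
# Mount the bucket as a local directory, then point nginx's root at it.
mkdir -p /srv/static
gcsfuse my-static-bucket /srv/static
# In nginx.conf:  root /srv/static;
```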

  • @soufianeodf9125
    @soufianeodf9125 5 months ago

    We know that Cloud Run cannot run inside a VPC, so why did he put the back-end Cloud Run service inside the yellow box?

  • @anilmm2005
    @anilmm2005 2 years ago

    Thanks for the great topic. Can you please make a video on serverless MLOps using the Vertex AI pipeline?

    • @TheMomander
      @TheMomander 2 years ago +1

      Thank you for the excellent suggestion! I'm afraid I don't know a lot about MLOps, but my coworker Priyanka created a great video about it. Search YouTube for "End-to-end MLOps with Vertex AI" and you will find it. Best of luck!