Subscribe to Google Cloud Tech → goo.gle/GoogleCloudTech
Excellent video, this was one of the few things chatGPT couldn't clearly explain how to set up.
Happy to hear it was useful. Sometimes that human touch is needed.
You guys put out really good content, which is friendly and informative. Knowing this information for the exams, for example, is really useful. Thank you!
Happy to hear the video was helpful!
The documentation on this is kind of lacking. I've tried every variation of the setup I can find online and still get a "Permission storage.objects.list denied on resource" error every single time. My service account is properly set up, has the correct permissions/access to my bucket, and is connected to my pool. My GitHub secrets contain the correct email and provider and are used in my workflow file, which runs correctly each time. However, permission is still denied. I've gone through dozens of Stack Overflow and similar threads from people experiencing the same error and tried their solutions, and nothing. Pretty frustrating experience when all I'm trying to do is read a couple of CSV files for an app so I don't have to manually download and push them once a week.
Sorry to hear you're getting error messages. I sometimes run into similar errors, especially with Cloud Storage. The strategy that has worked best for me is to temporarily grant the service account the "Editor" role on the project, make sure everything works, and then dial the permissions back.
@@TheMomander Thanks for the tip; I hadn't seen this specific role recommended. Unfortunately it still produced the same error.
@@gb-wj7dg Sorry to hear it didn't work. Well, at least we know the problem isn't the service account's permissions. If I were you, I'd ask the Google Cloud subreddit next. I have seen developers discuss WIF there before.
@@TheMomander ok thank you, I'll try my luck there
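For anyone hitting the same wall: a minimal GitHub Actions workflow for this kind of setup might look like the sketch below. All project numbers, pool/provider names, the service account, and the bucket are placeholders, not values from this thread.

```yaml
name: fetch-csvs
on:
  schedule:
    - cron: "0 6 * * 1"  # once a week

permissions:
  contents: read
  id-token: write  # required so the workflow can mint an OIDC token for WIF

jobs:
  fetch:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Exchange the GitHub OIDC token for Google Cloud credentials
      - uses: google-github-actions/auth@v2
        with:
          workload_identity_provider: projects/123456789/locations/global/workloadIdentityPools/my-pool/providers/my-provider
          service_account: my-sa@my-project.iam.gserviceaccount.com
      - uses: google-github-actions/setup-gcloud@v2
      # Wildcard copies require list permission on the bucket
      - run: gcloud storage cp "gs://my-bucket/*.csv" ./data/
```

One detail worth checking for the error above: a wildcard copy needs `storage.objects.list`, which `roles/storage.objectViewer` includes but some narrower or legacy object-read roles do not.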
@googlecloudtech Is it possible to access Google Cloud assets in one account from another Google Cloud account (belonging to another organisation)? For this use case, can OIDC be used when creating a workload identity pool? If so, what values should be used for the Provider ID and Issuer (URL) when the identity provider is Google Cloud?
How would this work with a reusable-workflow repo? I have a repo, CICD, that is configured with WIF. I then call its reusable workflows from other repos that are not configured with WIF, and I get a 403 permission denied error. Is this even possible?
Thanks for the wonderful session, it really starts my monday with new learning, will be waiting for more security focused videos.
Happy to hear that! I agree with you that security is important.
The UI shown here is different from the current one.
In the "Grant access with service account impersonation" menu, I need to fill in:
Select principals (identities that can access the service account):
attribute name: ...
attribute value: ...
What should I fill these in with?
is there a document to go through all the provider mapping options?
I don't fully understand your question, but if you search for "google cloud workload identity federation" you will get some pretty good results. Do those docs have what you need? If not, please elaborate on what you are looking for and I will try to find it.
Do the "workload identity provider ID" and "service account" have to be secrets? Can they just be plain text? Would be nice to know the security implications of that.
Thanks for the video :)
While neither of them is considered a secret, nor contains one (an attacker acquiring them couldn't do anything with them immediately), my recommendation would still be to treat them sensitively. Both the workload identity provider ID (which includes the project number and the pool ID) and the service account email itself are, I would say, sensitive pieces of information.
Great questions btw!
Thank you :)
@@lukapuka1296 I think in a public repository it's probably best to abstract them into variables — though not for security, but so they can vary and be easily cycled out by others who may use the app in conjunction with WIF. However, in a private repository, or more specifically in situations where they will rarely change, these values are perfectly fine to commit and can in fact _benefit_ from the visibility of the git commit log.
In this case, I see no benefit from a _security_ perspective aside from "security through obscurity" (to be blunt 😅), since knowing which service account the application impersonates or which WIF pool it uses shouldn't matter at all unless you can get your code to run in the workload being authenticated. That is, these values can only be used in the context of the workload itself (for example, if you already had push/merge access where your action or pipeline runs), at which point they are exposed anyway.
It's authenticating the _workload_, and if your workload is compromised, your secrets are already compromised. That's the beauty of WIF, since it helps to reduce the need for (or even get rid of) secrets at the workload level.
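To illustrate the point above: since these values are identifiers rather than credentials, they can live in plain GitHub repository variables (the `vars` context) instead of encrypted secrets. The variable names here are hypothetical.

```yaml
# Repository variables, not secrets: the provider ID and service
# account email identify resources but grant no access by themselves.
- uses: google-github-actions/auth@v2
  with:
    workload_identity_provider: ${{ vars.WIF_PROVIDER }}
    service_account: ${{ vars.WIF_SERVICE_ACCOUNT }}
```

Variables have the practical advantage that their values remain visible in logs and the repo settings, which matches the "benefit from visibility" argument above.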
Can Google's Workload Identity Federation be used if I want my back-end application, which runs outside Google Cloud, to access Google APIs such as the Calendar API? If not, what's the best solution for doing this in a production environment? I don't want to use a service account key.
I typically see Workload Identity Federation used to authenticate workloads looking to access GCP services (not necessarily RESTful APIs directly) without storing long-lived keys. A close example might be your back-end application (assuming it supports OIDC) using WIF to authenticate to and invoke a private Cloud Run service, which in theory could expose an endpoint that accesses the Workspace Calendar API.
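For completeness, the usual pattern for a workload running outside Google Cloud is to generate a credential configuration file and point the Google client libraries at it. This is only a sketch, assuming your external identity provider is already registered as a WIF provider; all resource names and paths are placeholders. Note that Workspace APIs such as the Calendar API may have additional requirements (e.g. domain-wide delegation) beyond plain WIF.

```shell
# Generate a credential configuration file for the external workload.
# No key is downloaded; the file only describes how to exchange tokens.
gcloud iam workload-identity-pools create-cred-config \
  projects/123456789/locations/global/workloadIdentityPools/my-pool/providers/my-provider \
  --service-account=my-sa@my-project.iam.gserviceaccount.com \
  --credential-source-file=/path/to/oidc-token.txt \
  --output-file=wif-credentials.json

# Google client libraries pick the file up via Application Default Credentials.
export GOOGLE_APPLICATION_CREDENTIALS=/path/to/wif-credentials.json
```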
Can you make a video on publishing an npm package to Google Artifact Registry?
Also, this video is a little bit outdated, and the Google documentation reads as if it were written for lawyers, not developers. Can you point to a relevant, up-to-date video?
I am using a Terraform module in one repository on GitHub and the manifest in another repository. When I try to run Terraform, it is unable to authorize access to the module. How can we achieve this with Workload Identity Federation?
I'm not entirely sure I follow the scenario. A few questions: how is the Terraform module in repo one being used (within a GitHub Actions workflow file)? And how does the manifest in the other repository relate?
Great stuff. Would be great if you guys could link the mentioned documentation here for reference. 🥂
The UI has changed now, and there are many other options we need to fill in to grant access. Please make a video on that.
Beautiful presentation, Martin and Luka! How can I find the previous video on service account keys?
Happy to hear you found the video useful. To find the other video, do a YouTube search for "How to deploy Cloud Run services with GitHub Actions".
Can we use the WIF approach for Firebase services, such as deploying Firebase Hosting and Firebase functions with GitHub Actions?
Sorry, I don't believe that's possible yet.
When I try to grant access to the service account, the "All identities in the pool" option under "Select principal" is not there.
If you are getting that display, it means attributes are set up in the provider configuration (e.g. attribute.repository); you'll need to select the given attribute and input the matching value (from the GitHub OIDC token) that you'd expect. So if I select the "repository" attribute and input the value "MyUser/MyRepo" in the Grant Access step, impersonation will only succeed if the OIDC token exchanged in the GitHub Actions workflow contains that repository value. Essentially, this step ensures that only identities with a certain attribute value can authenticate.
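As a concrete sketch of the setup described above (the pool, provider, project number, service account, and repo names are all placeholders), the attribute mapping on the provider and the matching impersonation grant look roughly like this:

```shell
# Map the GitHub OIDC token's claims to provider attributes,
# including the repository claim used for the grant below.
gcloud iam workload-identity-pools providers create-oidc my-provider \
  --location=global \
  --workload-identity-pool=my-pool \
  --issuer-uri="https://token.actions.githubusercontent.com" \
  --attribute-mapping="google.subject=assertion.sub,attribute.repository=assertion.repository"

# Allow only identities whose repository attribute is MyUser/MyRepo
# to impersonate the service account.
gcloud iam service-accounts add-iam-policy-binding my-sa@my-project.iam.gserviceaccount.com \
  --role=roles/iam.workloadIdentityUser \
  --member="principalSet://iam.googleapis.com/projects/123456789/locations/global/workloadIdentityPools/my-pool/attribute.repository/MyUser/MyRepo"
```

The `principalSet` member here is the CLI equivalent of picking the "repository" attribute and entering "MyUser/MyRepo" in the console's Grant Access step.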
Excellent, very helpful.
outdated
Sounds like Google Engineers played a game to pick only wrong names and ended up with WIF. Good video though
Good information, but way too scripted and cringey to watch. Use your own words and trust the knowledge of the people talking instead of scripting every word that is said. It is a pain to watch.