Great explanation, thanks!
Good Walkthrough...Please increase Volume
I will do
Thanks for the great explanation!
How would you go about using these environments for the cloud?
For example, on GCP would each environment be linked to a different GCP project? And any suggestions for AWS?
Lastly, is the repo available?
Hello Jaimerv19,
First of all, sorry for the late response.
I'll start with the very generic IT answer: "it depends" :) ...
In my demonstration I am using a single K8s cluster and use namespaces to target each environment, so there is no "cloud environment".
So if we want to target different cloud environments and higher-level "components", I would probably use "tags", which are generic across all cloud providers, whereas, as you mentioned, in GCP you can use projects and in Azure you can use Resource Groups.
What do you think about using tags? (That way you can use the same names and concepts on all clouds.)
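To make that concrete, here is a minimal sketch of what I mean, assuming a workflow where a configuration variable ENV_TAG is defined per GitHub environment; the ENV_TAG name, the resource names/ARNs, and the credential setup are all placeholders I made up for illustration:

```yaml
# Hypothetical sketch: the same "environment" tag applied on any cloud.
# ENV_TAG is an assumed per-environment configuration variable, and the
# resource names/ARNs are placeholders (cloud credential setup omitted).
on:
  push:
    branches: [main]
jobs:
  tag-resources:
    runs-on: ubuntu-latest
    environment: staging
    env:
      ENV_TAG: ${{ vars.ENV_TAG }}
    steps:
      - name: Tag AWS resources
        run: |
          aws resourcegroupstaggingapi tag-resources \
            --resource-arn-list "arn:aws:s3:::my-app-bucket" \
            --tags environment="$ENV_TAG"
      - name: Label GCP resources (labels are the GCP equivalent of tags)
        run: |
          # zone/project flags omitted for brevity
          gcloud compute instances update my-app-vm \
            --update-labels environment="$ENV_TAG"
```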
Thanks for this great walkthrough! Is it possible to dynamically create an environment during the workflow execution?
Defining something like "environment: ${{ any-var }}"
Thanks again
Yes, but you need an env variable
You can pass the "name" of an environment as a variable as you are showing.
One of the reasons to create an environment is to have "secrets" with a fine-grained scope or permissions, so in that case you have to create the environment itself using the UI or the GitHub API.
Do not hesitate to describe your use case in more detail and I will be happy to add information.
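For example, here is a minimal sketch of both points, assuming a manually dispatched workflow; the env_name input and the ADMIN_TOKEN secret are names I made up, and creating an environment through the API usually requires admin rights on the repository:

```yaml
on:
  workflow_dispatch:
    inputs:
      env_name:
        description: "Target environment name"
        required: true

jobs:
  ensure-environment:
    runs-on: ubuntu-latest
    steps:
      # REST endpoint: PUT /repos/{owner}/{repo}/environments/{environment_name}
      # The default GITHUB_TOKEN may lack the permission to create environments,
      # hence the hypothetical ADMIN_TOKEN secret (a PAT with repo admin rights).
      - run: |
          gh api --method PUT \
            "repos/${{ github.repository }}/environments/${{ inputs.env_name }}"
        env:
          GH_TOKEN: ${{ secrets.ADMIN_TOKEN }}

  deploy:
    needs: ensure-environment
    runs-on: ubuntu-latest
    environment: ${{ inputs.env_name }}  # the name is resolved at run time
    steps:
      - run: echo "Deploying to ${{ inputs.env_name }}"
```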
The volume is too low; about 10-15% higher would be good.
Will do better next time
Interesting, I see you had it update development by having a PR open, which means it will update on each new commit to the open PR. That's nice.
But if there are two PRs open, one with feature1 and another with feature2, this approach would break down, right?
PR1 would update the environment, then PR2 would hijack it, which could happen in the middle of dev1's testing and cause strange session-clearing issues / bugs.
Updating development from an open PR doesn't sound like the right approach.
Instead, when the PR is merged you would update development. If the PR is open and the dev wants an active environment to check against, you can create a temporary pr-env-# environment when they enter a command comment on the PR, e.g. "make environment" (see the sketch below), or have it happen automatically.
This way you are not clashing changes and creating a nightmare for leadership / individual developers in a multi-dev team.
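Something like this, as a rough sketch; the /make-environment command and the deploy script are just placeholders:

```yaml
on:
  issue_comment:
    types: [created]

jobs:
  preview:
    # react only when the command is commented on a pull request
    if: github.event.issue.pull_request && github.event.comment.body == '/make-environment'
    runs-on: ubuntu-latest
    environment: pr-env-${{ github.event.issue.number }}  # e.g. pr-env-42
    steps:
      # note: on issue_comment, checkout gives the default branch; resolving
      # the PR head SHA is left out to keep the sketch short
      - uses: actions/checkout@v4
      - run: ./deploy.sh "pr-env-${{ github.event.issue.number }}"  # placeholder deploy script
```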
Would love to hear your feedback on this.
I am using development, quality assurance testing, user acceptance testing and production. Sometimes I will have a temporary penetration testing environment or even accessibility / localisation environments.
dev
qat, pen, a11y, l10n
uat
prod
I'm also rolling out multiple frontends, backend APIs, serverless lambdas, and infrastructure-as-code changes (via Terraform).
dev is automatically updated on the merge of a PR,
qat is updated by the testing team via a manual trigger / webhook (clicking a button in a chat channel),
pen is used only if we have active penetration testing going on; otherwise it is shut down. It is triggered manually, same for a11y and l10n.
uat is updated by the business analyst / quality assurance team (again via a webhook in a chat channel),
For production, when a release is created the environment will update (sketched below).
We have compliance, linting, building, and testing (unit, integration, Pact contracts, e2e* and component).
We have infrastructure plan / apply with manual approval (prod only) after reviewing the plan.
We have updating of assets: S3, ECR, etc.
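Roughly, the triggers map to workflow events like this (a simplified sketch; the environment names match the list above, and anything else is an assumption):

```yaml
on:
  push:
    branches: [main]     # dev: runs when a PR is merged to main
  workflow_dispatch:     # qat / uat / pen / a11y / l10n: manual or chat-webhook trigger
    inputs:
      env_name:
        required: true
  release:
    types: [published]   # prod: runs when a release is created

jobs:
  deploy:
    runs-on: ubuntu-latest
    # pick the environment from the event that started the run
    environment: ${{ github.event_name == 'push' && 'dev' || github.event_name == 'release' && 'prod' || inputs.env_name }}
    steps:
      - run: echo "Deploying to the selected environment"
```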
First of all, sorry for the late response.
You are correct, I am sharing the same "environment" for all development PRs, but that is just an implementation choice based on your infrastructure.
So in my case I have a single K8s cluster and I use one namespace per PR; this is how I segregate the preview environments.
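As a rough sketch of that part (the cluster credential setup is omitted, and k8s/ is a placeholder manifest directory):

```yaml
on:
  pull_request:

jobs:
  preview:
    runs-on: ubuntu-latest
    steps:
      # assumes kubectl is already configured against the cluster
      - name: Create the per-PR namespace if it does not exist
        run: |
          kubectl create namespace "pr-${{ github.event.number }}" \
            --dry-run=client -o yaml | kubectl apply -f -
      - name: Deploy into the PR namespace
        run: kubectl apply -n "pr-${{ github.event.number }}" -f k8s/
```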
And what you are describing is probably what I would do in a "real-life project" (as opposed to my demo environment, which is used for enablement); as you can see, in my demonstration I am not talking about infrastructure (besides K8s deployments), and I know that it is missing.
Your environment and process look very good, and it would be nice to learn more about them. If you have a blog post or video about this, do not hesitate to share it!