What videos would you like to see me make next?
Dude, can't believe this channel exists.
I am glad that it is of some value to you!
Hey Ryan, great content - I appreciate your video! Just a couple of questions - is saving the artifacts in S3 buckets (what you're doing in the infrastructure project) mandatory? I assume the artifact bucket creation in the pipeline project is mandatory.
I am building a personal API system using API Gateway and Lambda functions - and I figure the stack & construct code for those resources would go into the infrastructure project?
Hey there, thank you!
In the infra project, the bucket was just an example of something you might deploy in that stack; it does not need to be a bucket. It could be a Lambda function, a DynamoDB table, etc.: anything your infrastructure needs that might get updated frequently. In the pipeline project, strictly speaking, the artifact bucket is not necessary. The CDK will actually create one by default if you don't specify it. However, I like making it myself because it lets you set the bucket name rather than getting the randomly generated string of characters the CDK comes up with.
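For anyone curious, here is roughly what naming the artifact bucket could look like in the pipeline stack. This is a minimal sketch using CDK v2 (aws-cdk-lib) with a CDK Pipelines CodePipeline; the bucket, pipeline, repo, and branch names are placeholders, not the ones from the video:

```ts
// pipeline-stack.ts - a minimal sketch, not the video's exact code.
import { Stack, StackProps, RemovalPolicy } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as s3 from 'aws-cdk-lib/aws-s3';
import { CodePipeline, CodePipelineSource, ShellStep } from 'aws-cdk-lib/pipelines';

export class PipelineStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // Explicit artifact bucket: you control the name instead of getting
    // the random string the CDK would otherwise generate.
    const artifactBucket = new s3.Bucket(this, 'ArtifactBucket', {
      bucketName: 'my-app-pipeline-artifacts', // placeholder; must be globally unique
      removalPolicy: RemovalPolicy.DESTROY,
      autoDeleteObjects: true,
    });

    new CodePipeline(this, 'Pipeline', {
      pipelineName: 'my-app-pipeline',
      artifactBucket, // omit this prop and the CDK creates a default bucket for you
      synth: new ShellStep('Synth', {
        input: CodePipelineSource.gitHub('your-org/your-repo', 'dev'), // placeholders
        commands: ['npm ci', 'npm run build', 'npx cdk synth'],
      }),
    });
  }
}
```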
For your architecture, yes, the API Gateway and Lambdas would go in the infrastructure project. That way, as you make updates to and iterate on your endpoints, you can push them to source and have them continuously deployed for testing!
Hope this helps.
Hi Ryan, I wonder, if I would like to create a microservice application (assume the architecture is 3 Lambdas and 1 API Gateway), would the Lambda handler code be put in the infrastructure folder, like infrastructure/lib/lambda/handlers, and would the spec of these 3 Lambdas and the API Gateway just be stated in the infrastructure-stack.ts file?
Yes, any serverless infrastructure would be placed in the infrastructure folder. This ensures that every time you push changes to source and the pipeline runs, any new Lambdas, or changes to existing Lambdas or the API Gateway, will be deployed!
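To make that concrete, here is a minimal sketch of what infrastructure-stack.ts could look like with that layout. The handler names (users, orders, payments), routes, and runtime are placeholders I made up for illustration, not anything from the video:

```ts
// infrastructure-stack.ts - a sketch assuming handler code sits in
// infrastructure/lib/lambda/handlers (compiled to .js before deploy).
import { Stack, StackProps } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import * as apigw from 'aws-cdk-lib/aws-apigateway';

export class InfrastructureStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // One Lambda per handler file, e.g. lib/lambda/handlers/users.js
    // exporting `handler`.
    const makeFn = (name: string) =>
      new lambda.Function(this, `${name}Fn`, {
        runtime: lambda.Runtime.NODEJS_18_X,
        code: lambda.Code.fromAsset('lib/lambda/handlers'),
        handler: `${name}.handler`,
      });

    const usersFn = makeFn('users');
    const ordersFn = makeFn('orders');
    const paymentsFn = makeFn('payments');

    // One API Gateway fronting all three Lambdas.
    const api = new apigw.RestApi(this, 'Api');
    api.root.addResource('users').addMethod('GET', new apigw.LambdaIntegration(usersFn));
    api.root.addResource('orders').addMethod('GET', new apigw.LambdaIntegration(ordersFn));
    api.root.addResource('payments').addMethod('GET', new apigw.LambdaIntegration(paymentsFn));
  }
}
```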
@cloudmancer thank you very much!
No problem!
@cloudmancer Hey Ryan, after I create the pipeline and the infrastructure code, I wonder, for this CI/CD approach, whether the development cycle would be: if I would like to create a new feature for my app, I first check out the dev branch, and after finishing the new feature, I push it to the dev branch of my GitHub repo and test it in the dev environment. After testing, I would like to deploy to production.
I am not sure which of the approaches below is correct:
1. Merge the local dev branch into my local main branch and then push it to the main branch of my GitHub repo for the production environment.
2. Open a pull request from the dev branch to the main branch and ask a peer to review it; if it is good, merge it into the main branch. I guess a merge commit would then be generated on the main branch and trigger CodePipeline to execute.
I would branch off dev and make a feature branch, work on whatever new feature, and then you could either use the console and open a PR or push directly to dev from your local. This just depends on how protected your dev branch is and whether you are working with other people on the same repo, or it's just you and it's a personal project.
If you wanted to test your feature branch itself, you could always deploy a new CI/CD pipeline and have it point to your feature branch! Just follow the steps we did for making the dev/prod context objects and the allowed envs array and do it for your feature branch, with --env=yourFeatureName. This adds some real extra safety when testing your feature. The nice part about this CI/CD pipeline is that you can create/destroy as many of them as you need for however many envs you want.
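I don't have the exact snippet handy, but the context-objects/allowed-envs idea could be sketched roughly like this. Two assumptions here that may differ from the video: the env name is passed in as CDK context (e.g. cdk deploy -c env=myFeature), and the PipelineStack takes a hypothetical branch prop to point its source at the matching branch:

```ts
// bin/pipeline.ts - a sketch of the env-context idea, not the video's exact code.
import * as cdk from 'aws-cdk-lib';
import { PipelineStack } from '../lib/pipeline-stack';

const app = new cdk.App();

// dev/prod plus any feature envs you want pipelines for.
const allowedEnvs = ['dev', 'prod', 'myFeature'];

// Read the env from context, e.g. `cdk deploy -c env=myFeature` (assumed mechanism).
const env = app.node.tryGetContext('env') ?? 'dev';
if (!allowedEnvs.includes(env)) {
  throw new Error(`Unknown env "${env}". Allowed: ${allowedEnvs.join(', ')}`);
}

// One pipeline per env; feature envs assume the branch name matches the env name.
new PipelineStack(app, `PipelineStack-${env}`, {
  branch: env === 'prod' ? 'main' : env, // hypothetical prop on the sketched stack
});
```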
Generally speaking, I try to manage any merges into main from the console. I would push whatever you are working on to dev (whether directly or through the console with a PR) and let it deploy through the CI pipeline. If you like it and it works, I would head to GitHub and open a PR from dev into main. Once that's done, you (or a reviewer) can visually check all the changes and make sure everything looks good before approving and merging into the main branch. This will trigger the pipeline and should deploy everything to prod.
If for some reason there is a catastrophic failure going from dev to prod, you could always just revert main to the previous commit before it blew up, and that would retrigger the pipeline to redeploy the previous build!
Videos about where to get resources or docs to learn these topics, as there are very few courses or channels that teach this level of prod-ready code.
Great suggestion! I will work on compiling some resources for learning. I agree there are not many good places to find stuff!
Hey, just wanted to update you: I compiled some resources into a video and released it a few weeks ago. Let me know if this helps at all!