I cleared my exam today! This channel has been very helpful. Top notch explanation. Appreciate you!
I'm now a SAA thanks to your training and education. Thank you.
That's amazing, Congratulations!!
@100Jim Was the exam very difficult, or were there easy questions too? Planning to take it next week.
@100Jim Planning to take it next week too; it'll be great if you could let me know any suggestions ASAP.
@@RohitTawade Yes, it was hard. There were some easy questions as well, but they are only easy if you know them.
Passed my SAA-C03 certification 2nd August, thanks to this amazing channel and Aakash. Saw similar questions on the test from these videos.
Firstly, thank you for these videos; they are very useful and you explained them superbly well.
For Q102, the correct answer seems to be option B instead of option A,
since SSE-S3 does not provide automatic key rotation, which is a requirement in the question.
A customer managed AWS KMS key, however, does provide automatic key rotation, and the default rotation period is 365 days.
Option A is the correct answer; it will support it.
@@chrismorris3413 Hi, "A" is correct....
"...As an additional safeguard, S3 encrypts the key itself with a master key that it rotates regularly."
Thanks to these videos, I passed the Solutions Architect exam easily. Thank you a lot.
Thank you bro, I just passed my SAA certification exam after watching all your videos. You really helped me. May Allah increase you in knowledge.
Good job, keep on keeping on. These are some of the resources I used in my SAA journey, and I passed.
Thanks for giving the practice questions. I watched all five parts and was able to understand the methods, concepts, and techniques. Achieved certification today.
Again thanks for the help, keep going
Glad to hear that, congratulations!!
I like how you explain all the questions with detailed explanations of the solutions. I also love the humor you added; it made it fun. Thank you.
Thank you for the helpful videos; I successfully cleared my exam. I cleared up so many concepts just by watching your practice question series. You are amazing!!
I have cleared AWS SAA-C03. A few questions I found were from this series. Thank you!
Congratulations!!
Thank you so much for these 150 questions. I just cleared the exam today. Out of 65 questions, 20 were word-for-word the same as these 150 questions, and if you understand the tricks taught in these 150 practice questions, you will definitely clear the certification.
Thanks for these videos.
Thank you very much … I have cleared my SAA-C03 exam… Thank you very much
Thanks to these videos, I cleared my exam today! You are awesome, keep doing it!
Thank you so much for all your efforts in explaining the concepts! I cleared the SAA exam today!
Just cleared SAA today. Went through the Udemy course first, then watched and practiced all five of your videos. Thanks!!
@johnyart862 Is it enough to take the Udemy course and watch these videos to pass the certification? I have completed my Udemy SAA course and am going through this series.
@@RohitTawade Yes, it will be more than enough to pass. I did the same.
@@johnyart862 Thanks buddy for your quick response.
Well Done!!
@@RohitTawade I'm new and about to appear for the exam; can you please suggest which Udemy course to follow? Thanks in advance.
Thank you for all your content, it helped me a lot to prepare for the exam and get my Solutions Architect certification!
Thank you
Thanks to your videos I passed SAA on Friday! Thank you!
@@nazeebahmed3033 Quite similar. On the day mine had more EKS and CloudFormation questions than I practiced here, but there were a few from his videos that came up word-for-word like the one on composite alarms for CloudWatch
Great job!
@@peaceofcode Thanks, looking to do Developer Associate next
Thank you!! Dude, your tutorials are very helpful! Just passed the exam TODAY!
Great job!
Great video - one correction - Question 108: B is incorrect because you don't attach a VPC gateway to an AZ. Answer A is correct.
That's it, you're right!
Also, you can't attach a security group to a gateway endpoint.
I earned my certificate today...
Many many thanks to you...
Your series helped me a lot...
Congratulations🎉
Did you get a similar kind of question as discussed in the questionnaires?
@@mdsohail5371 almost four to five questions
Hi, thank you for all your AWS SAA-C03 detailed explanations of topics and practice questions. The way you explained them helped me a lot to clear the certification.
Thank you so much once again.
Glad it was helpful!
Thank you very much for the videos, sir. Also waiting for more videos in the AWS SA playlist about databases and other topics🙏🙏
Will upload soon
Thanks a lot, bro. Just got the email that I passed my exam. Your question series helped me a lot, and out of the 150 questions, 4 were exactly the same in the exam.
Awesome!!
@@peaceofcode Is the real exam easy to pass?
In Q105, you selected "create a private bucket," which is wrong.
Q145 - I think it should be A. The question asks which solution is most cost-effective for the dev environment, and reducing the number of instances that run all the time is more cost-efficient than limiting the maximum number of instances in the ASG. With option A we can still scale up the dev environment to test high availability.
Can you cover 132 and explain why it would not be D, EventBridge? Couldn't EventBridge be triggered once the EC2 environment has been established and send an SNS notification to the operations team?
Question 107, answer should "B" because DAX is a read & write cache, specially optimized for DynamoDB... what are your thoughts ??
Hi Akaash,
Today I cleared my AWS SAA-C03 exam, bro. Thanks for your videos. The way you explain the questions and eliminate the options is awesome 😎😎
126: C and D are the correct answers. Nowhere in the question do we see that we need data analysis, so A is out. Chaining KDS and KDF together is overkill.
Question 133: the account is already created, so adding it to a group and MFA are the answers.
I think MFA and an inline policy are the answers @dannyhd8301
Q117: but option A does not state that the required infrastructure will be hosted in a second region, unlike option D. That is the primary requirement for disaster recovery. I feel this question is poorly made.
Learning so much with this series
Thank you so much just got certified today, your videos are Fire!!!!!
Did you get a similar kind of question as discussed here?
Hi, I think that for Q146, the correct answer should be B. Your reasoning is entirely correct; however, it misses the fact that no NAT gateway is mentioned in answer D. To the best of my knowledge, it is necessary to deploy a NAT gateway in the public subnet in order to reroute traffic, isn't it?
Thanks for the input, I'll look into it
No, we are talking about inbound traffic, not outbound traffic.
Hi, isn't D the correct answer? Why would we create a metadata table and then do file conversion and save to S3?
Q117: isn't the answer D, as the question also states they can tolerate up to 30 minutes of downtime and potential data loss? They are willing to accept that, and there is no mention of cost-effectiveness either.
Hi Akash Kumar Nanda, I'm happy to announce that I have passed the SAA-C03 exam by scoring 788 (78.8%). Thank you so much for your videos. These videos have been incredibly helpful, and I've understood several concepts.
Thank you very much. I cleared my exam.
I doubt some of your answers. In Q102, the answer is B.
@peaceofcode Q135: why are we using a memory optimized instance instead of compute optimized?
I'm lost on that as well, although the compute optimized option only replicates select tables, which makes C a more complete/accurate solution.
Question 146: C is correct; we usually use a NAT gateway to provide internet access for EC2 instances in private subnets.
Q117: can you please explain why it is not D, as the question states to create a disaster recovery solution and that they can tolerate up to 30 minutes of downtime and potential data loss?
I think D is the right answer. The question asks for disaster recovery (for the whole app) with 30 minutes.
Option A only puts the DB in a different region. What about the rest of the components? You cannot fail over to a DB alone (without EC2 instances).
For 147:
The correct combination of actions would be:
- *A. Enable binlog replication on the RDS primary node.*
- *C. Allow long-running transactions to complete on the source DB instance.*
### The actions a Solutions Architect should take before implementing a read replica:
#### 1. *A. Enable binlog replication on the RDS primary node.*
- *Explanation*: A read replica requires the *binary log (binlog)* to be enabled on the primary RDS instance. Binlog replication is necessary because the data changes on the source instance are captured and sent to the replica. Without enabling binlog replication, a read replica cannot be created or function correctly.
- *Why necessary*: This is a prerequisite for creating a read replica, as it ensures that updates to the primary instance are recorded and replicated to the read replica.
#### 2. *C. Allow long-running transactions to complete on the source DB instance.*
- *Explanation*: Before creating a read replica, it's essential to ensure that *long-running transactions* on the source DB instance are allowed to complete. If there are incomplete transactions, they could cause data inconsistencies between the primary DB and the replica, leading to synchronization issues.
- *Why necessary*: Data integrity is crucial, and long-running transactions need to be finished to ensure the replica starts with the same data as the primary DB.
### Why not option E:
- *E. Enable automatic backups on the source instance by setting the backup retention period*: This option relates to backups, which are a good practice but unrelated to creating a read replica. Backups are about recovery, while replication is about real-time data synchronization.
Q115 - it states lowest cost, hence answer B, not A, as we have a month to transfer 700 TB. Please clarify. Thanks.
Let me check!!
Did you check?
@@rohanrajsingh9939 700 TB over 500 Mbps - did you compute how long that would take?
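A quick sanity check on the numbers in this thread (decimal units assumed, and a perfectly utilized link, which real transfers never achieve):

```python
# How long does 700 TB take over a 500 Mbps link?
TB_IN_BITS = 8 * 10**12      # 1 TB = 10^12 bytes = 8 * 10^12 bits (decimal units)
LINK_BPS = 500 * 10**6       # 500 Mbps

seconds = 700 * TB_IN_BITS / LINK_BPS
days = seconds / 86_400
print(f"~{days:.0f} days at full, uninterrupted line rate")  # ~130 days
```

Roughly 130 days, far beyond the one-month window, which is why an offline transfer device tends to win this style of question.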
For Q145:
The best answer could be A:
Why Option A may be a better choice for cost-effectiveness.
### *Option D: Reduce the maximum number of EC2 instances in the development environment's Auto Scaling group.*
- This option suggests reducing the *maximum number* of instances that the Auto Scaling group can scale up to. This means the system is still capable of scaling up to multiple instances (though fewer than before) based on load, even if traffic is low.
- *Cost-effectiveness*: It does reduce costs, but the system could still scale up to more instances than necessary, even during low-traffic periods. This means you may be running more EC2 instances than actually required for a development environment, potentially incurring higher costs.
- *Scalability*: This option is more suited for environments that expect variable traffic but want to limit the peak number of instances. However, the development environment likely has predictable low traffic, making this approach unnecessary.
### *Option A: Reconfigure the target group in the development environment to have only one EC2 instance as a target.*
- *Cost-effectiveness*: This option directly reduces the number of EC2 instances to *one*, ensuring the lowest possible cost for the development environment, which likely does not need to scale up and handle high traffic.
- *Scalability*: In a development environment, especially if low traffic is expected, there's no need for multiple instances or Auto Scaling. Since the traffic is likely predictable, having one instance is the simplest and most cost-effective solution.
### *Why not Option D?*
1. *Auto Scaling is unnecessary* for the development environment:
- If the development environment has *low and predictable traffic*, there's no need for multiple instances or Auto Scaling.
- Option D still allows for *multiple instances*, which could lead to higher costs even if the system only scales to a reduced maximum number of instances. The development environment doesn't need this flexibility.
2. *Simplicity*:
- Reconfiguring to a single EC2 instance (Option A) simplifies the architecture. In a development environment, keeping things simple is usually more efficient and cost-effective, especially if high availability or scaling isn't critical.
3. *Direct cost-saving*:
- By using only *one EC2 instance* in the development environment (Option A), the company minimizes costs effectively without relying on Auto Scaling. Auto Scaling might not be as cost-effective in a development environment with predictable, low traffic.
### *Conclusion*:
While *Option D* reduces costs by limiting scaling, *Option A* is *more cost-effective* because it removes unnecessary complexity and scales the development environment down to just one EC2 instance. This aligns perfectly with the goal of minimizing costs for a low-traffic development environment.
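To make the cost argument concrete, here is a toy comparison; the $0.05/hour rate is a made-up placeholder, not a real EC2 price:

```python
# Toy monthly cost comparison for Q145 (hypothetical hourly rate).
HOURLY_RATE = 0.05       # assumed USD/hour for one dev instance (placeholder)
HOURS_PER_MONTH = 730

option_a = 1 * HOURLY_RATE * HOURS_PER_MONTH   # single registered target
option_d = 3 * HOURLY_RATE * HOURS_PER_MONTH   # ASG with a reduced maximum of 3

print(f"Option A (1 instance):        ${option_a:.2f}/month")
print(f"Option D (up to 3 instances): ${option_d:.2f}/month")
```

Whatever the real rate, Option A's worst case is one instance-hour per hour, while Option D's worst case scales with the reduced maximum.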
I come here when I'm burned out from taking practice tests. Are these questions coming from ExamTopics?
Hi, can you please go into a bit more detail on Q147 option D? Why is it one of the right answers for the question?
Yes, let me look into it; I'll get back to you.
@@peaceofcode the answers
To improve the read performance of a database in Amazon RDS for MySQL by adding a read replica, you should take the following actions:
Enable binlog replication on the RDS primary node: This allows the primary node to stream its binary logs to the read replica, enabling data replication.
A. Enable binlog replication on the RDS primary node.
Allow long-running transactions to complete on the source DB instance: Before creating a read replica, it's advisable to let any long-running transactions complete to ensure consistency between the source and the replica.
C. Allow long-running transactions to complete on the source DB instance.
When is part 4 coming out? Taking my test next week, hopefully. Thanks.
Soon!!!!
Q-116, Answer is C instead of B.
@vikasShsrma-wu9wc Can you explain why C is correct and not B?
Thank you for doing this video
What is the best way to take the exam, online or offline?
Online is preferred by many people
Offline in an exam center will be safest...
You, sir, are amazing!!!!!
Question 115: you cannot transition directly to S3 Glacier; you have to store it in S3 first.
Thanks for the correction!!
@@peaceofcode taking saa tomorrow, wish me good luck!
So what's the correct answer?
@@sagitokishev2894, did you pass the exam?
@@mandurikarthik yes
Thank you so much!!! I passed the exam today
Congrats
Question 145 was confusing; I didn't understand it at all.
For Q102:
In my mind, the correct answer is B.
### Explanation
*AWS Key Management Service (AWS KMS)*: AWS KMS is a managed service that makes it easy to create and control encryption keys used to encrypt your data. The keys are protected in hardware security modules. KMS is integrated with other AWS services, making it easier to encrypt data you store in these services.
*Customer Managed Keys*: These are KMS keys that you manage and control. Unlike AWS managed keys, you can configure these keys to enable automatic rotation, where AWS KMS automatically rotates the keys every year.
*Automatic Key Rotation*: This feature simplifies the key management process by automatically creating new cryptographic material for your KMS keys every year. It reduces the operational overhead associated with manually rotating your encryption keys and enhances security by limiting the time window that a single key is in use.
*Set S3 Bucket Encryption*: By setting the S3 bucket to automatically use the customer managed KMS key for encryption, all data uploaded to the bucket will be encrypted under this key. The integration of S3 with KMS allows for seamless encryption and decryption of your data using the defined keys.
### Why Option A Is Less Suitable
- *Option A*: Using SSE-S3 with Amazon managed keys provides server-side encryption, but it doesn't support the requirement for customer-managed key rotation.
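For reference, the bucket-level default encryption described here is a small piece of configuration; this is a sketch of the `ServerSideEncryptionConfiguration` document S3 accepts (the key ARN and account ID are placeholders for your own customer managed key):

```json
{
  "Rules": [
    {
      "ApplyServerSideEncryptionByDefault": {
        "SSEAlgorithm": "aws:kms",
        "KMSMasterKeyID": "arn:aws:kms:us-east-1:111122223333:key/example-key-id"
      },
      "BucketKeyEnabled": true
    }
  ]
}
```

Note that automatic rotation is enabled on the KMS key itself (via `aws kms enable-key-rotation`), separately from this bucket setting.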
Can you please also make practice test questions for the AWS Developer certification?
One series is already in the Developer Associate playlist; please check.
105: I think D is the correct answer. The CLI is a burden.
Thanks for keeping it posted
Any time!
@@peaceofcode I passed on my second attempt, thanks to the AWS SAA-C03 playlist. The concepts/theory videos plus practice questions are golden nuggets of information. Thank you again for keeping it posted
Hello - when is Part 4 being released? Thanks
It will be released soon!!
Thanks!
Thank you!!
Can you give me an idea of how I can practice dump questions from the internet without buying them?
Don't buy; they are all the same
@@ajitdalvi596 Thanks for your reply. The same as these videos?
You can check my AWS SAA-C03 videos in the playlist; I have explained all the tricks
I am preparing for SysOps; I need resources, please help me
Udemy 😊
@peaceofcode Aakshay, are you conducting live training?
Yes
Please share contact number
When will you upload the next part, thank you.
Very soon!
Thank you so much for creating this series. It's really helpful.
How come Q121 is D? The files need to be accessed concurrently, and since the app is running on Linux, shouldn't it be EFS? I know EFS support is for Linux and it can be accessed in parallel.
Most cost-effectively = S3
EFS is expensive when compared to S3
@@shalmankhan1418 Valid point
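The price gap being argued here can be sketched with approximate us-east-1 list prices; both figures are assumptions for illustration only, so check the current AWS pricing pages:

```python
# Storage-only monthly cost for 1 TB, ignoring requests, throughput, and transfer.
S3_STANDARD_PER_GB = 0.023    # assumed USD per GB-month
EFS_STANDARD_PER_GB = 0.30    # assumed USD per GB-month

size_gb = 1024
s3_cost = size_gb * S3_STANDARD_PER_GB
efs_cost = size_gb * EFS_STANDARD_PER_GB
print(f"S3 Standard:  ${s3_cost:.2f}/month")
print(f"EFS Standard: ${efs_cost:.2f}/month")  # roughly an order of magnitude more
```

At these assumed rates, EFS storage runs about 13x the cost of S3 Standard, which is why "most cost-effectively" usually tilts the answer toward S3 when the workload doesn't strictly need a POSIX file system.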
I think Q117's answer is D, via ChatGPT 😇😇
Request to reconfirm the answer for Q102. Will it be "A" or "B"? Encryption has to be enabled before data movement, not after data movement. Please confirm.
Waiting for the last 50 questions - coming any time soon?
Yes very soon!!!!
@@peaceofcode A great thanks to you, bro. Literally, your voice was in my ear when I was in the exam hall...
Q-110, I think the Answer is C instead of D
You are right; C is the right answer
@@abhirammishra7096 Why is C correct?
Q-117, Answer is C instead of A.
OK, I'll look into it… and if you are claiming another answer, then please provide a reference or an explanation for your choice… so that other learners can follow along too…
I think the answer is D, because they have stated that they can tolerate 30 minutes of downtime and potential data loss, which points towards creating the infrastructure using AWS Backup; the data loss would occur while creating the other DB.
If my score here is 90%, can I pass the real exam?
It's also very important to keep your concepts clear
@@peaceofcode Sir, I have passed the exam. I watched all your videos; the explanations are very good. I have been working on AWS for two years, so the concepts were clear. 🙏
All the difficult questions are in video 3😢
Yes we also need to practice tough questions…
Agree, hardest so far