I recently achieved my database engineer certification, and I have to say, this channel was a game changer throughout my journey. The explanations are clear, the content is well-structured, and it covers everything you need to succeed. Highly recommend this channel to anyone preparing for certification. Thank you for the invaluable support!
Got the result yesterday: 911/1000. This series helped me a lot, and you will see many questions from it. There are new and unseen questions, but you will form a clear understanding by listening to the explanations, so the different or unseen questions will not trip you up. Thank you, sir!! You are a Dronacharya to me.
congrats! how long does it take to get results after you take the test?
@@sycamore9755 2 days
Correction - Question 34 - Answer D
Destination in Security Account: The destination Kinesis Data Stream should be created in the security AWS account, as this is where the logs will be analyzed.
IAM Role and Trust Policy: An IAM role must be created in the production account with a trust policy that allows CloudWatch Logs to assume this role. This enables CloudWatch Logs to send data to the Kinesis Data Stream in the security account.
Cross-Account Permissions: The IAM role in the production account must also have permissions to put records into the Kinesis Data Stream in the security account.
Subscription Filter: A subscription filter must be created in the production account to forward CloudWatch Logs to the Kinesis Data Stream.
I think the IAM Role and Trusted Policy should be in the Security account and grant permission for the Production account to assume it
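To make this thread concrete, here is a rough boto3 sketch of the cross-account subscription pattern being described. All account IDs, names, and ARNs are placeholders, it assumes you hold credentials for both accounts, and (in line with the comment above and AWS's cross-account subscription docs) the destination-side role lives in the security account and trusts the CloudWatch Logs service:

```python
import json
import boto3

SECURITY_ACCOUNT_ID = "222222222222"   # placeholder
PROD_ACCOUNT_ID = "111111111111"       # placeholder
STREAM_ARN = f"arn:aws:kinesis:us-east-1:{SECURITY_ACCOUNT_ID}:stream/security-logs"
ROLE_ARN = f"arn:aws:iam::{SECURITY_ACCOUNT_ID}:role/CWLtoKinesisRole"  # trusts logs.amazonaws.com

# --- In the security account (use a session with security-account credentials):
# create a CloudWatch Logs destination that fronts the Kinesis stream.
logs_security = boto3.client("logs", region_name="us-east-1")
destination = logs_security.put_destination(
    destinationName="SecurityLogsDestination",
    targetArn=STREAM_ARN,
    roleArn=ROLE_ARN,
)["destination"]

# Allow the production account to subscribe to this destination.
logs_security.put_destination_policy(
    destinationName="SecurityLogsDestination",
    accessPolicy=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"AWS": PROD_ACCOUNT_ID},
            "Action": "logs:PutSubscriptionFilter",
            "Resource": destination["arn"],
        }],
    }),
)

# --- In the production account (separate session/credentials):
# subscription filter that forwards the log group to the destination.
logs_prod = boto3.client("logs", region_name="us-east-1")
logs_prod.put_subscription_filter(
    logGroupName="/prod/app-logs",   # placeholder log group
    filterName="forward-to-security",
    filterPattern="",                # empty pattern = forward everything
    destinationArn=destination["arn"],
)
```

The subscription filter itself stays in the production account, which is the part every correction in this thread agrees on.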
I have completed my exam today; most of the questions were from your videos... thank you 🙏
That's fantastic news! I'm so happy to hear that my videos were helpful in your exam preparation. Congratulations on passing! I wish you all the best in your future endeavors.
For question 19: the answer should be B and C, since Lambda functions have a 15-minute timeout limit and it is mentioned that each query can run for more than 15 minutes. Please share your thoughts.
Also, it is much better to check whether the flow completed with MWAA than with a custom Lambda solution, which needs extra checks/configuration, etc.
The Lambda functions do not run the query; they only trigger it to run in Athena. It's the Athena query that takes 15 minutes or more, which does not affect the function. With Step Functions, you can rerun the function after the query completes.
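A minimal sketch of that pattern (the function name, result bucket, and event shape are hypothetical): the Lambda only starts the Athena query and returns the execution ID, so the 15-minute Lambda limit never comes into play.

```python
import boto3

athena = boto3.client("athena")

def lambda_handler(event, context):
    # Start the query and return immediately; Step Functions polls for completion later.
    response = athena.start_query_execution(
        QueryString=event["query"],                      # SQL passed in by the state machine
        QueryExecutionContext={"Database": event.get("database", "default")},
        ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},  # placeholder bucket
    )
    # The function finishes in seconds regardless of how long the query runs.
    return {"QueryExecutionId": response["QueryExecutionId"]}
```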
I believe B and E should be the answer, since option C (Glue) will add additional cost. Correct me if I am wrong.
I have passed my exam today... got almost 50+ questions from your 4 videos. The explanations of the concepts and the tips on how to find the answer helped me a lot. Watching your videos will be good enough to pass the exam. Thank you very much for all that you do to help us pass the exams... 🙏 Will continue watching your project videos.
Q34. The correct answer should be D. The architecture should be like this: [AWS Prod Acct CW Log] ---- filter and stream to --> [ AWS Security KDS ]. The KDS should be in the security account.
yes the correct answer is D.
I passed the exam with flying colours 🎉 I completely recommend this channel. There were around 45 to 47 questions coming from these videos. Thanks a lot for sharing, and for the methodologies to find the right answer.
Congratulations! Were they exact same or similar methodologies?
@@AnnaVarma-c9w 45 of the 65 questions were the same as in the videos
are you just promoting the channel?
@@AnnaVarma-c9w As I said, 45 to 47 were exactly the same
@@shashankshekhar7659 I am not just promoting the channel; I am simply a viewer who sees the results every day from using the channel
Thanks @sthithapragnakk - my friend and I took the exam yesterday and we cleared the DEA-C01 exam; around 40-45 questions were from these videos alone. Thanks a lot 😊
I have completed my certification today and got 45 questions from the list... thank you
Thanks a lot...!!! I completed my certification. Most of the questions were from your video.
Hi there, did you just use the videos or any other additional study material?
Congratulations on completing your certification! 🎉 I'm so glad the videos helped you with the exam. Thanks for sharing your success, and best of luck with all your future endeavors! 😊🙌
Hello @sthithapragna, I have Passed my DEA-CO1 Exam last week, thank you for the Dumps it really helped me to achieve this milestone. This videos covered most of the questions in the exam, Thanks for the explanation you gave for all the questions.
Congratulations on passing your DEA-C01 exam! 🎉 That's a fantastic accomplishment, and I'm thrilled to have played a part in your success. It's incredibly rewarding to hear that my dumps and explanations were valuable in your preparation.
I'm glad that the questions covered in my videos aligned well with the exam content. It's always my goal to provide the most relevant and helpful information for those working towards their AWS certifications.
Please don't hesitate to reach out if you have any further questions or need any assistance as you continue your AWS journey. I wish you continued success in your data engineering career!
Failed my first attempt and stumbled on this video... wow, wow... Feeling much more equipped now :) 😇. Thank you
You got this! All the best.
@@sthithapragnakk Had to come back here. Wrote it today and passed! Thanks a lot, the channel is really helpful.
I have completed my exam today, thank you for giving valuable information and explanation on how to answer the question.
For question 27, I think it should be C, because C is correct (KDS -> Redshift).
D is wrong as it has more operational overhead (KDS -> KDF -> S3 -> Redshift).
Yes C seems to be pointing to Redshift streaming ingestion.
Yes C is correct not D because Kinesis Data Firehose can only perform near real-time processing and NOT real-time data streams processing.
the auto refresh for streaming data increases operational overhead
@@priyankasharma9882 the question states near realtime insights
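For anyone who wants to see what option C (Redshift streaming ingestion directly from Kinesis Data Streams) roughly involves, here is a hedged sketch using the Redshift Data API; the cluster, schema, stream, and role names are placeholders, and the SQL follows the pattern in the Redshift streaming ingestion guide:

```python
import boto3

rsd = boto3.client("redshift-data")

ddl_statements = [
    # External schema mapped to Kinesis; the IAM role must allow reading the stream.
    """
    CREATE EXTERNAL SCHEMA kds
    FROM KINESIS
    IAM_ROLE 'arn:aws:iam::111111111111:role/RedshiftKinesisRole';
    """,
    # Materialized view over the stream; AUTO REFRESH keeps it near real time.
    """
    CREATE MATERIALIZED VIEW orders_stream AUTO REFRESH YES AS
    SELECT approximate_arrival_timestamp,
           JSON_PARSE(kinesis_data) AS payload
    FROM kds."orders-stream";
    """,
]

for sql in ddl_statements:
    rsd.execute_statement(
        ClusterIdentifier="analytics-cluster",  # placeholder cluster
        Database="dev",
        DbUser="awsuser",
        Sql=sql,
    )
```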
Thank you a lot for this dump! I passed my exam last week and your videos were very helpful. During the exam the explanation you gave for the questions came to my mind and I could finish the exam in less than 40 minutes.
Hi sthithapragna. Thank you so much. I cleared AWS DE certification. Almost 40 questions were from this playlist. Thank you team.
Thank you so much! 90% of the exam questions were from the PDF. Appreciate the explanations in this video.
Where is the PDF link?
can you share the pdf?
Question 36:
Create an AWS Glue partition index and enable partition filtering (Option A): creating an index for the partitions in the AWS Glue Data Catalog helps optimize partition pruning during query planning. With partition filtering enabled, you specify the partition key values in the WHERE clause of your Athena SQL queries, which instructs Athena to scan only the relevant partitions and improves query performance.
Bucket the data based on a common column (Option B): organize the data in the S3 bucket based on a column that the records have in common, using a consistent hashing algorithm to distribute data evenly across buckets. This allows Athena to efficiently filter and read only the relevant data during query execution. When user queries use the common column in the WHERE clause, Athena scans only the relevant buckets, reducing query planning time.
How will Athena partition projection based on the S3 prefix help?
Athena partition projection based on the S3 prefix can be a powerful tool for optimizing query performance, particularly when dealing with datasets that are highly partitioned across many folders in S3. Normally, Athena queries that involve partitioned tables need to load partition metadata from the AWS Glue Data Catalog before executing the query. If there are a large number of partitions, this can significantly slow down the query startup time as each partition's metadata must be fetched and processed. With partition projection, Athena generates the partition metadata dynamically based on the configuration you provide, such as using prefixes, patterns, or ranges. This eliminates the need to fetch and load partition metadata from the Glue Data Catalog for each query.
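As an illustration only (the table, bucket, and column names are made up), partition projection is switched on through table properties, for example via a DDL statement submitted with the Athena API:

```python
import boto3

athena = boto3.client("athena")

# Hypothetical table: logs partitioned by a dt= prefix in S3, e.g. s3://my-logs/dt=2024-03-01/
ddl = """
CREATE EXTERNAL TABLE IF NOT EXISTS app_logs (
    request_id string,
    status     int
)
PARTITIONED BY (dt string)
STORED AS PARQUET
LOCATION 's3://my-logs/'
TBLPROPERTIES (
    'projection.enabled'         = 'true',
    'projection.dt.type'         = 'date',
    'projection.dt.range'        = '2024-01-01,NOW',
    'projection.dt.format'       = 'yyyy-MM-dd',
    'storage.location.template'  = 's3://my-logs/dt=${dt}/'
);
"""

athena.start_query_execution(
    QueryString=ddl,
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},  # placeholder bucket
)
```

With these properties in place, Athena derives the partition values from the configuration at query time instead of reading partition metadata from the Glue Data Catalog.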
Very useful, many questions are covered from the exam here.
For 27 it should be C; check the Redshift developer guide.
Which is the right answer?
Agreed
#21 AWS Glue Data Quality wasn't created for detecting and/or obfuscating PII. It feels like a decoy answer, and B seems more appropriate, as you can create a Glue job transformation layer to obfuscate PII visually for that purpose.
True. Use Glue Studio's Detect PII transform to identify and redact columns containing PII and then trigger the state machine.
I agree with you. I would go with B
@@amir_ob me too
What do you guys think about Q22? The question says the current ETL workflow uses AWS Glue + AWS EMR. If we use an AWS Glue workflow, it cannot trigger jobs on EMR, so A shouldn't be the correct answer. It should be B or D. Please help me here.
@@OACisco While Glue Workflows can handle basic orchestration of Glue jobs and crawlers, they lack support for integrating EMR steps and have limited flexibility for complex workflows with branching, retries, or additional service integration.
D is a third-party service and may require additional effort.
So I would go with Step Functions.
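A rough sketch of what that Step Functions approach could look like, written as an Amazon States Language definition in a Python dict; the Glue job name, EMR cluster ID, and step details are placeholders:

```python
import json

# ASL definition chaining a Glue job and an EMR step via the native service integrations.
state_machine = {
    "StartAt": "RunGlueJob",
    "States": {
        "RunGlueJob": {
            "Type": "Task",
            "Resource": "arn:aws:states:::glue:startJobRun.sync",
            "Parameters": {"JobName": "transform-job"},          # placeholder Glue job
            "Next": "RunEmrStep",
            "Retry": [{"ErrorEquals": ["States.ALL"], "MaxAttempts": 2}],
        },
        "RunEmrStep": {
            "Type": "Task",
            "Resource": "arn:aws:states:::elasticmapreduce:addStep.sync",
            "Parameters": {
                "ClusterId": "j-XXXXXXXXXXXXX",                  # placeholder EMR cluster
                "Step": {
                    "Name": "spark-aggregation",
                    "ActionOnFailure": "CONTINUE",
                    "HadoopJarStep": {
                        "Jar": "command-runner.jar",
                        "Args": ["spark-submit", "s3://my-bucket/jobs/aggregate.py"],
                    },
                },
            },
            "End": True,
        },
    },
}

print(json.dumps(state_machine, indent=2))
```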
Q27. The correct answer should be C. See "Getting started with streaming ingestion from Amazon Kinesis Data Streams" from AWS Redshift documentation.
Yes
@sthithapragnakk thank you. I passed my aws data engineer associate last week. I like that you give detailed explanation of why an answer is correct or wrong.
Can you share your path to success? Was it based on the videos?
#34 Answer "A" is incorrect. It proposes setting up the destination stream in the prod account which does not align with the requirement to analyze logs in the security account.
The correct answer is "D" - Create a destination data stream in the security AWS account. Create an IAM role and a trust policy to grant CloudWatch Logs the permission to put data into the stream. Create a subscription filter in the prod AWS account.
Why this option?
Account: By creating the Kinesis Data Stream in the security AWS account, you ensure the logs are stored directly within the env intended for security log analysis. This setup centralizes security log management and enhances data security and governance.
IAM Role and Trust Policy: This role and policy setup enables the prod account to securely send logs to the Kinesis Data Stream in the security account. The trust policy allows the prod account to assume the role, thus granting necessary permissions to put data into the stream.
Subscription Filter in Prod Account: The subscription filter in CloudWatch Logs of the prod account forwards the logs to the Kinesis Data Stream in the security account. This filter specifies which log data to send, effectively routing the required logs for security analysis.
You are correct. Answer is D
Thanks, it really helped. I passed my test. A thousand and a thousand thanks!
Congratulations on passing your exam! 🎉 I'm so happy to hear that you made it through. A thousand and a thousand thanks to you for sharing your success! Wishing you the best in your future endeavors! 😊
22) Step Functions; EMR cannot be orchestrated in a Glue Workflow.
Within your Glue workflow, you can add a step that triggers an EMR cluster to perform specific tasks. This step can invoke the EMR API to start a cluster, run a specific job or task on the cluster, and then terminate the cluster once the task is complete.
@@sthithapragnakk Hello Sir, thank you for a detailed video and lucid explanation on AWS DEA exam.
@@sthithapragnakk Could you specify how to do that? Thank you.
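One possible shape of that (a hedged boto3 sketch, not an official pattern; the cluster ID, script path, and step name are placeholders) is a Glue Python shell step that submits an EMR step and waits for it to finish:

```python
import boto3

emr = boto3.client("emr")

# Submit a step to an existing EMR cluster and wait for it to finish.
cluster_id = "j-XXXXXXXXXXXXX"  # placeholder cluster ID
response = emr.add_job_flow_steps(
    JobFlowId=cluster_id,
    Steps=[{
        "Name": "nightly-spark-job",
        "ActionOnFailure": "CONTINUE",
        "HadoopJarStep": {
            "Jar": "command-runner.jar",
            "Args": ["spark-submit", "s3://my-bucket/jobs/etl.py"],
        },
    }],
)
step_id = response["StepIds"][0]

# Block until the step completes (or fails), then let the workflow continue.
waiter = emr.get_waiter("step_complete")
waiter.wait(ClusterId=cluster_id, StepId=step_id)
```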
Hi, I have passed the AWS Data Engineer exam. Thank you for these dumps.
Congratulations on passing the AWS Data Engineer exam! That's a remarkable accomplishment, and I'm truly honored to have played a part in your success. I'm glad my content proved valuable in your preparation.
I strive to create comprehensive and informative resources that simplify complex concepts and help individuals like yourself grasp the intricacies of AWS services. Your success is a testament to the effectiveness of those efforts.
I wish you all the best in your future endeavors as an AWS Data Engineer. Please don't hesitate to reach out if you have any further questions or require additional assistance along the way.
@Nikhil Do the dumps cover most of the questions? I am taking the exam in a few hours.
@@elvisbrahi2523How was your experience?
Awesome video. 60+ questions from here. Thanks a lot, sthithapragna. Cleared the exam in less than 50 minutes 🤣. Guys, please watch all 6 videos. No need to buy a Udemy course.
That's amazing! Congratulations on clearing the exam in less than 50 minutes! 🎉 I'm so glad to hear that the videos helped you so much and that you found them useful. Your recommendation means a lot, and I'm sure others will appreciate the advice. Keep up the fantastic work, and feel free to reach out if you need anything else. You've got this! 🚀🌟
Thank you! (¡Gracias!)
You are welcome, and thank you for supporting the channel.
Hi, thanks for the video. On question 22, given that they run EMR and Glue jobs, doesn't that require an orchestrator that can trigger both? As I understand it, Glue Workflows can only trigger Glue jobs, not EMR jobs.
Within your Glue workflow, you can add a step that triggers an EMR cluster to perform specific tasks. This step can invoke the EMR API to start a cluster, run a specific job or task on the cluster, and then terminate the cluster once the task is complete.
Can you please double-check Question 39? The answer should be A (most operationally efficient).
Thank you. I Aced my exam
For Q23 - though S3 Glacier Deep Archive is the lowest-cost and most cost-effective option, "The new storage solution must continue to provide high availability." It does not have high availability the way Glacier Flexible Retrieval does. In that case, the answer could be B), right?
High availability here means a high SLA, not retrieval time, so only the One Zone tier does not provide high availability.
Q36 should be A & D. D is important because they are facing performance issues and a large number of partitions already exist. My suggestion is that we only need to apply an index and change the format to Parquet.
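For the "apply an index" part, a minimal boto3 sketch of adding a partition index to an existing Glue Data Catalog table; the database, table, and partition key names are assumptions:

```python
import boto3

glue = boto3.client("glue")

# Add a partition index so queries can prune partitions without listing all of them.
glue.create_partition_index(
    DatabaseName="sales_db",        # placeholder database
    TableName="events",             # placeholder table
    PartitionIndex={
        "IndexName": "by_year_month",
        "Keys": ["year", "month"],  # must be existing partition keys, in order
    },
)
```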
NB 17:
Surely it's A and B? We need to prepare the data for analytics.
For Question #19 - can we invoke long-running Athena queries through Lambda, given that the max timeout of Lambda is 15 minutes? Thank you!
I guess the catch here is that you just invoke the SQL query with Lambda, without waiting for it to finish. Lambda returns the query ID, and then in Step Functions you wait until the query has executed, which is checked by polling the Athena API.
Seems they want us to assume that's what the Lambda is actually doing
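The "poll the Athena API" piece could look roughly like this: a hypothetical checker Lambda that Step Functions calls inside a Wait/Choice loop until the state is SUCCEEDED or FAILED.

```python
import boto3

athena = boto3.client("athena")

def check_query_status(event, context):
    # Step Functions passes through the QueryExecutionId returned by the starter Lambda.
    result = athena.get_query_execution(QueryExecutionId=event["QueryExecutionId"])
    state = result["QueryExecution"]["Status"]["State"]  # QUEUED, RUNNING, SUCCEEDED, FAILED, CANCELLED
    return {"QueryExecutionId": event["QueryExecutionId"], "State": state}
```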
Q22. Can we trigger EMR from Glue workflows? If not, Step Functions would be the answer.
Questions 21 and 23 were part of my SAA-C03 exam....
21 - B is the answer
39 - Can Glue crawler read views?
Glue crawlers will typically discover and catalog those tables and their schemas, but they won't specifically catalog the views
Should the answer be A?
Hi @sthipragna, can you please share the PDF if possible?
What is the channel for AWS projects that is referred to in the video ?
Same channel - ua-cam.com/play/PL7GozF-qZ4KcGI-4btMsdKVl31kRB5Ce4.html, ua-cam.com/play/PL7GozF-qZ4KdoImDrGFM5-sk153URZJfT.html
Thank you for this set of questions. How many questions are there in total? When are you going to upload the next set?
80 in total; I will record the other 40 today.
@@sthithapragnakk thank you
Hello, thanks for the questions and answers. Sometimes you explain the other options, but for the correct option you just say to watch the video; instead of that, could you at least add one or two more lines on the correct option? Thanks.
I cleared the exam; around 15 questions came from outside this set.
Congratulations 🎉🎊
Congratulations!! I am also appearing in a few days. How many questions came from these 2 sets?
Hi, apart from these questions, did you go through any Udemy course?
did you get most questions from this set?
@sekhar8938 Any tips or courses that I need to start prepping for this exam?
Can you please share the PDF of these questions?
Can anyone please clarify the answer for Q21? Is it B or C?
Kindly help clarify the correct answer for question #34: should it be B, not A?
While the setup in option B is mostly correct, the subscription filter should be created in the production account, not the security account. The subscription filter is responsible for forwarding logs from CloudWatch Logs in the production account. I corrected the answer to D for this question.
@@sthithapragnakk Thank you so much for the reply. I recently took the Subject Matter Expert training course on Skill Builder, which gave me an understanding of the cognitive complexity of questions specific to each certification level.
@@sthithapragnakk Totally agree with D, for the following reasons:
1. The destination data stream must be created in the Security account as the destination.
2. An IAM role and a trust policy are to be created in the Security account, to be assumed by the Production account.
3. A CloudWatch Logs subscription filter must be created in the Production AWS account for dispatching logs to its Security counterpart.
Sir, could you please upload videos for AWS DevOps Engineer learning?
Hi, can you please share the PDF if possible?
Any update on databricks exam?
Which exam and what update are you expecting?
@@sthithapragnakk ua-cam.com/video/AXQF6cq-t8g/v-deo.html any update with a new dump?
@@sthithapragnakk The latest AWS Certified Data Engineer Associate exam question dumps, e.g. from Feb/March 2024
Could you please upload a corrected video? Looking at people's comments and yours, it seems some answers in the video are incorrect; for a beginner it is very difficult to work out the correct ones.
Sir, when can we expect the next set of questions beyond these 80?
as soon as the questions are available to me.
@@sthithapragnakk thank u sir
Can you please reply?
When are you uploading the February dumps for the AWS Cloud Practitioner exam?
I really need those questions.
I don't have any new questions yet. In my opinion you don't need new questions; just watch the last 6 months' videos and you are good to go.
Thanks, sir, for uploading these questions.
Just wanted to check with everyone: the quality of the video is not good and I am not able to read the questions. Is it like that for everyone?
Change the video quality to a higher resolution (1080p) and try again.
Very useful. Can you do the same for the Google Professional Data Engineer certification? Thank you.
In the next 1-2 months I will be done covering the AWS exams; my next goal is to cover Azure & Google. If you can't wait that long and need the questions now, email me.
Sir, can you upload more questions for this certification, please?
I will upload as soon as I receive new questions, stay tuned
@@sthithapragnakk Are the 80 questions enough to perform well in the exam? Because I have scheduled the exam tomorrow
@@rishienugala Did you pass the exam? Is this bundle enough?
@@MohamedAminBelgasem yes
Can you do the same for Snowflake?
Question 21 - are you sure option C is the correct response? I thought you couldn't use Glue Data Quality for PII.
Can use
Is it illegal for tech YouTubers from India to have good audio quality?
Good video, by the way.
I passed my exam (first AWS exam) @sthithapragna your videos and discussions were super super helpful! Thank you so much!
Great job! Congratulations 🎊🎉
Can I trust this to pass the exam?
please share this document
Send an email - sthithapragnasya@gmail.com
Thank you so much @sthithapragnakkv ❤, I passed with 886/1000
Do you have dumps?
Answers for all the questions are here. Take the test before seeing the video:
1-D, 2-B, 3-A, 4-BD, 5-B, 6-B, 7-B, 8-B, 9-A, 10-A, 11-C, 12-A, 13-B, 14-B, 15-A, 16-B, 17-AD, 18-B, 19-BA, 20-B, 21-C, 22-A, 23-C, 24-A, 25-C, 26-DB, 27-D, 28-BA, 29-C, 30-B, 31-CB, 32-BC, 33-B, 34-D, 35-C, 36-AC, 37-D, 38-C, 39-C, 40-B
Hi @sthipragna, can you please share the PDF if possible?
Hi @sthithapragna, can you please share the PDF if possible?