✌ KnowledgeIndia is an initiative to teach Cloud and related technologies in an easy & practical manner. We believe in jargon-free discussion.
👍 There are many videos on our channel through which you can learn Cloud for free. If you find our videos helpful, then please share them & help others as well. If you would like to be part of this initiative, connect with us and send a message (links given below).
👉 Join our Hands-on CLOUD TRAINING - www.knowledgeindia.in/p/hands-on-cloud-training-real-world.html
👉 Connect with us for CLOUD CONSULTING requirements. Best way is to connect on LinkedIn and send a direct message.
👉 Become a UA-cam Channel Member and get many benefits - www.knowledgeindia.in/p/membership-benefits.html
☕ You can support us here - www.buymeacoffee.com/knowledgeindia
☕ You can support us here - ko-fi.com/knowledgeindia
▬▬▬ 🔰 L E A R N I N G C L O U D ⤵ ▬▬▬
👉 Subscribe to KI UA-cam Channel - ua-cam.com/users/knowledgeindia
👉 Receive email alerts - bit.ly/ki-google-group
👉 Join our LinkedIn Group - bit.ly/ki-linkedin-group
👉 Join UA-cam MEMBERSHIP - ua-cam.com/channels/zpHRBVnkzBfSsXostYuW1g.htmljoin
👉 Launch your CLOUD CAREER - www.knowledgeindia.in/p/launch-your-cloud-career.html
👉 All our Video Tutorials - www.youtube.com/@knowledgeindia/videos
👉 Guidance on Cloud Certification - ua-cam.com/video/7G_qJcCk7Zk/v-deo.html
👉 Hands-on AWS Training - www.knowledgeindia.in/p/hands-on-cloud-training-real-world.html
Hello sir, I would like to tell you that your channel is number 1 for AWS cloud. Just learning the AWS services is not enough; at the enterprise level we need a good understanding of how to implement these services in the environment, and the way you explain is just awesome.
Glad it helped! I am sure you will like our recently released KMS MasterClass video as well, check it here - ua-cam.com/video/8ailVnVPigk/v-deo.html
I really benefited from this video. Detailed explanation. Hope you will do more like this.
Thank you for such a detailed video. Appreciate the effort in making this; it really helped me understand and prepare for a DevOps interview.
Thanks Manish. Requesting you to share this video with your friends / colleagues.
I have watched videos from Udemy and ACG, but I haven't found videos like your channel's... Awesome video, sir... You are the Guru of Cloud... Please upload videos on Route 53.
Sure, will do it Manish.
Thanks for appreciating it. Please support us by telling your friends about our videos. Thank you.
Nice one got it. Thanks KI
what a great Channel.....God bless you Sir!! you're doing an excellent job
Thanks Mithun. You could help us by writing about our channel and sharing the link on LinkedIn, Facebook, etc.
@@knowledgeindia Absolutely!! Manipulators charge a bomb for AWS training, and you are doing it out of your kindness. I don't know how deeply I should thank you for your generosity.
Thank you Knowledge India, it's really great stuff and your tutorials are very informative and clear!!👍💐
Thanks Jagdish. I request you to kindly share it on LinkedIn and help us.
Great video on S3.
Glad you enjoyed it. Please share it in your circle as well.
Superb video. Thanks for uploading.
Thank you so much for your excellent videos. This really helps in clarifying even the small doubts. I know these videos were created 2 years back and, as you know, AWS is rapidly updating and creating new features every now and then, so this video may also need an update in some places. For instance, the different types of Storage Classes. Also, Reduced Redundancy objects are now stored in >= 3 AZs rather than 2.
Very nice and crisp information. Thank you.
Thanks Aditya. Please help us by sharing the video and channel with your friends and on LinkedIn/FB.
You are a Great Teacher Sir
Thanks a lot. Please share with your friends on FB / LinkedIn..
It's a very useful video. Thanks for this.
Thanks a lot. Please do check out our playlists for more easy AWS videos. Please share and subscribe to get new AWS videos.
Awesome explanation. Short and sweet! :)
Thanks .. If you have got benefited from this channel, please write about it at -- aws-tutorials.blogspot.in/p/do-you-like-it.html You can also look at Live session details on the same page.
SUBSCRIBE & SHARE with your friends please. Follow our FB page -- fb.me/AWStutorials
Great presentation and very well explained
Thank you very much, Knowledge India... Great help :)
Hi Sandeep, thanks for your kind words. Please connect on LinkedIn (write a TESTIMONIAL if possible) and SUBSCRIBE to our blog and youtube channel. I shall be bringing more content soon. Thanks a lot.
Also, please look at our playlists for more videos --- (ua-cam.com/users/knowledgeindiaplaylists)
Please SUBSCRIBE to my channel to get all the video updates.
Thanks for the video KI !👍
Thank you :) .. I hope you benefit even more from our practical videos. Show your support by sharing the videos on LinkedIn & FB.
thank you so much
You're welcome!
You are doing a good job! Keep up the passion. It is being appreciated! :)
Thanks Karan. You can also check out our playlists for more videos on AWS.
Very nice!
Thank you very much from Peru!!!!
Thank you Miguel .. I hope to provide more and more support. Please spread the word in Peru.
You can also join our Live Sessions, details at aws-tutorials.blogspot.in/p/page1.html
Easy to understand, sir. Thank you 👍🏻
Thanks Mano. If you liked the videos, please share it with your friends as well on FB & LinkedIn.
excellent explanation
Thanks Adarsh. I would request to look at our playlists for SA & SysOps here -- ua-cam.com/video/ywHFXfuJoSU/v-deo.html &&& ua-cam.com/video/UFSH-KuDGj8/v-deo.html
Connect with me on LinkedIn to read interesting important AWS updates --- www.linkedin.com/in/knowledgeindia
Please follow my FB page fb.me/AWStutorials & Twitter - twitter.com/#!/knowledge_india
And for AWS exercises, you can refer our blog -- aws-tutorials.blogspot.com/
Another awesome session from you !! Thanks sirji !!
1. At 13:35 you talked about versioning and gave the example abc.doc, and you mentioned that if you create another file with the same name,
it doesn't get overwritten. Please correct my understanding here: I assumed versioning applies only when you edit and overwrite the same file, so that it maintains the old file?
2. When cross-region replication is enabled, why does Amazon use the public network, when AWS promotes services like CloudFront and Transfer Acceleration (which use the Amazon internal network for speedy and reliable data transfers)?
3. In S3 the data is stored in lexicographical order, and Amazon advises using random prefixes so that the data gets spread across partitions. Why is that? If it's not spread across, the data is going to be clustered in one place, or even in the same rack, which might give better throughput, right?
1. There is no Edit in S3. If you upload another file with the same name, it is overwritten when versioning is not enabled; a new version is created when versioning is enabled.
2. Between 2 regions, a private network is not guaranteed, though AWS tries to route via the private network when possible (hence consider it public). Also, think of CloudFront as the Amazon network from a region to Edge Locations across the world.
3. If all the files are in the same partition, the READ throughput it can give is limited. Hence, spreading the files across different partitions makes sense, and the requests also get distributed.
Good questions. Do great.
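For anyone who wants to see point 1 in action, here is a minimal boto3 sketch (bucket name hypothetical): with versioning enabled, uploading the same key twice keeps the first upload as an older version instead of overwriting it.

```python
import boto3

s3 = boto3.client("s3")
bucket = "ki-demo-bucket"  # hypothetical bucket name

# Turn versioning on for the bucket
s3.put_bucket_versioning(
    Bucket=bucket,
    VersioningConfiguration={"Status": "Enabled"},
)

# Upload the same key twice; the first upload survives as an older version
s3.put_object(Bucket=bucket, Key="abc.doc", Body=b"first upload")
s3.put_object(Bucket=bucket, Key="abc.doc", Body=b"second upload")

# Both versions are still retrievable
resp = s3.list_object_versions(Bucket=bucket, Prefix="abc.doc")
for v in resp.get("Versions", []):
    print(v["VersionId"], v["IsLatest"])
```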
really awesome tutorial
Thanks. Also, please see our playlist for SA & SysOps. Share our videos if they are helpful.
sure bro..
Can I move data from Glacier vaults to a new bucket?
Superb explanation, thank you.
If you don't mind, please do one more topic on VPN-to-VPN connection from one region to another region.
AWS natively doesn't provide VPN across regions. You will have to use third-party software to do that.
I have a requirement where I need to create two public S3 buckets: external users should upload their files to one bucket with write access, and the other should be read-only for viewing. I have added the bucket policies and made them public. Now my question is: where do I ask the external users to upload? Can people who don't have AWS access upload? Is there an option like an S3 endpoint where they just click and upload? Could you please help?
Can we limit the storage of an S3 bucket to 5 GB? Is there any provision for this?
Question: Standard and Standard-Infrequent Access already provide 3 copies of the objects in a region. Why do people then go for cross-region replication? Do you want a 4th copy? This questions data stability on AWS; please clarify.
When an org wants 99.999% availability, they store data across 2 regions; this helps in scenarios where one region becomes unavailable.
Share our videos if they have helped you and you have got answers.
Thank you for this video, and I appreciate your time and effort to educate others.
I have a question on S3. In the case of Standard S3, if a bucket/object gets replicated twice within a region and thus 3 copies of the data are available in different AZs in that region, what happens when a region (like Mumbai) has only 2 AZs? Does it not provide standard S3 services?
Good observation. It is only for EC2 that you see the number of AZs as 2. For managed services like S3, SQS, DynamoDB, etc., there are 3 AZs in all the regions.
Thank you. By the way, how do I register for your professional course?
Hi, could you please share more details of the latest Lifecycle Management Policy with the new S3 Storage Classes?
- Standard-IA
- Intelligent-Tiering (object can be moved after 1 day of creation)
- One Zone-IA
- Glacier (object can be moved after 1 day of creation)
- Glacier Deep Archive (object can be moved after 1 day of creation)
Beautiful tutorial. Thank you very much.
Please help me create a new bucket lifecycle policy to “expire current version of object after 396 days from object creation”.
You should be able to specify 396 days in the expiration (delete) condition,
and in the same way, you have an option for previous versions there as well.
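For reference, a minimal boto3 sketch of such a lifecycle rule (bucket and rule names hypothetical):

```python
import boto3

s3 = boto3.client("s3")

# Expire the *current* version of every object 396 days after creation
s3.put_bucket_lifecycle_configuration(
    Bucket="ki-demo-bucket",  # hypothetical bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-current-after-396-days",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to all objects
                "Expiration": {"Days": 396},
            }
        ]
    },
)
```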
Hello sir, is this the old AWS appearance, and is the newer one something different? I mean, the new AWS bucket-creation screen has a blue & black appearance.
Yes, the UI keeps changing.
Quite old now; it needs to be updated. Still a good one.
Thanks for your appreciation. You can support our initiative of Free Practical Cloud Tutorials by sharing this video with your friends on Social channels, whatsapp etc.
If it helped you solve a problem and you would like to applaud us, click the Applaud button :)
For regular 1-1 interaction with me, check our Membership - ua-cam.com/channels/zpHRBVnkzBfSsXostYuW1g.htmljoin
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Thank you
Thanks Amar. Please do check out our playlists for more easy AWS videos. Please share and subscribe to get new videos.
thank you sir :)
Thanks for the video, which is so helpful. I have a scenario where we need to download files from an S3 bucket to a Linux server (ETL). We can download the files using the cp command or s3api, but I am looking for a method/idea for how to rename or place a flag on a file in the S3 bucket once we download it, so that when we scan for new files in the next run, we list only the un-flagged/new files. Please suggest if you have come across this situation or a solution. Thanks.
Welcome SM, can you support us by sharing the video in your professional circle please?
For your problem, 2 ways -
1. After downloading a file, move it to a folder in that bucket, e.g. /archived/. Next time, look only for files at the base of the bucket and not inside any folder.
2. Use the S3 sync command; this brings only the new files to your local machine (but if you have removed earlier-used files from local, it will download them again). docs.aws.amazon.com/cli/latest/reference/s3/sync.html
Choose one of the above.
Thanks for your support, keep sharing and loving us. :) Do join Linkedin group www.linkedin.com/groups/10389754/
Sure, I will share these videos on LinkedIn, and thanks for the response. I have some limitations in the above case. The first limitation is that we do not maintain history on the local/destination system, hence we cannot use sync. The second limitation is that the source files will be accessed by more than one system at different intervals; hence, if I move a file to an archive location in the S3 bucket, the other systems would fail to download it. So I was thinking to capture the last downloaded file's timestamp from the S3 bucket, and on the next download, compare and pick the files that are newer than this timestamp, download them, and store the latest timestamp again in a tracking file on the local system. I am just checking if we have any alternative/more efficient way to handle the above case with those two limitations.
Well, I think what you are proposing would be best in this case.
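A minimal boto3 sketch of that timestamp approach, assuming a hypothetical bucket name and tracking-file path:

```python
import boto3
from datetime import datetime, timezone
from pathlib import Path

s3 = boto3.client("s3")
bucket = "ki-etl-source"               # hypothetical bucket name
track = Path("/var/etl/last_run.txt")  # hypothetical tracking file

# Read the timestamp of the previous run (epoch start on the first run)
last_run = (datetime.fromisoformat(track.read_text().strip())
            if track.exists()
            else datetime(1970, 1, 1, tzinfo=timezone.utc))

newest = last_run
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=bucket):
    for obj in page.get("Contents", []):
        if obj["Key"].endswith("/"):
            continue  # skip folder placeholder objects
        # Download only objects modified after the previous run
        if obj["LastModified"] > last_run:
            s3.download_file(bucket, obj["Key"], Path(obj["Key"]).name)
            newest = max(newest, obj["LastModified"])

# Persist the newest timestamp for the next run
track.write_text(newest.isoformat())
```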
Keep up this good work; it makes the concepts & overview easy to learn, since I am still crawling in my AWS learning.
One Q: Does Reduced Redundancy replicate to 2 AZs, while STD & STD-IA replicate across ALL AZs, or only up to 3 AZs?
Looking forward to a complete AD session in AWS; if I missed any source, please let me know.
Thanks, great job KI!
Yes, RR replicates in 2 AZs.
Standard & Standard-IA replicate in 3 AZs.
I have a Udemy account, but it sucks; learning from YouTube is great.
Our playlists should help you more
@@knowledgeindia yes I did check it thanks
Very old one; can you please update it, or make a different one for the different S3 storage classes like One Zone-IA, Glacier Deep Archive, etc.?
Please check our channel; there is a new video for the same.
Thanks for uploading such a valuable video, SIRJI...
Can you please share one on ROUTE 53 as well?
Thanks .. Sure, will do very soon.
If you have got benefited from this channel, please write about it at -- aws-tutorials.blogspot.in/p/do-you-like-it.html
SUBSCRIBE & SHARE with your friends please. Our FB page is -- fb.me/AWStutorials
I am a bit confused between ACLs and bucket policies. Both can grant permissions to other AWS accounts. I tried granting permission to another AWS account via an ACL, but the other account holder cannot see my bucket in his account. Do these policies just provide CLI/API permissions to other accounts? And does an ACL provide all read/write (create, delete, etc.) permissions, while a bucket policy provides specific permissions?
By giving permission to another account's user, the bucket won't show up in his S3 console. As you guessed, it only gives API/CLI access.
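To illustrate: with the grant in place, the other account's user can still reach the bucket by name through the API, even though it is not listed in their console. A minimal boto3 sketch (profile and bucket names hypothetical):

```python
import boto3

# Credentials of the *other* account's user (hypothetical profile name)
session = boto3.Session(profile_name="other-account")
s3 = session.client("s3")

# The granted bucket won't appear in list_buckets()/the console,
# but it can still be accessed directly by name
for obj in s3.list_objects_v2(Bucket="shared-bucket").get("Contents", []):
    print(obj["Key"])
```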
Thanks
@AWS Can you please upload a video on this topic: Use nginx to Add Authentication to Any Application.
Why is " versioning "mandatory in case of cross-region replication ? Could you please explain the theory behind it .. ? thanks in advance!
Though AWS does not document the reason behind this, I can guess the following.
When you enable CRR between buckets (e.g. b1 & b2), as soon as the object PUT completes on b1 it starts getting copied to b2. For bigger files this could take a considerable amount of time. If, in between, the object on b1 gets deleted, CRR could be in trouble; hence, to ensure completeness, versioning is mandatory. We have many more videos on AWS topics, organized in playlists here -- ua-cam.com/users/knowledgeindiaplaylists
Please help us to spread the knowledge to others as well; please SUBSCRIBE to our UA-cam Channel & LIKE and SHARE the videos if they helped you to learn.
You can subscribe to our blog to receive useful AWS related content -- aws-tutorials.blogspot.com
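For reference, a minimal boto3 sketch of turning on replication; versioning must be enabled on both buckets first, and the bucket names, regions, and IAM role ARN below are hypothetical (the role must be allowed to replicate on your behalf):

```python
import boto3

# Clients for the two regions involved (regions hypothetical)
src = boto3.client("s3", region_name="ap-south-1")
dst = boto3.client("s3", region_name="us-east-1")

# Versioning must be enabled on BOTH buckets before replication
src.put_bucket_versioning(Bucket="ki-source-bucket",
                          VersioningConfiguration={"Status": "Enabled"})
dst.put_bucket_versioning(Bucket="ki-destination-bucket",
                          VersioningConfiguration={"Status": "Enabled"})

# Replicate every new object from source to destination
src.put_bucket_replication(
    Bucket="ki-source-bucket",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/crr-role",  # hypothetical role
        "Rules": [
            {
                "ID": "replicate-all",
                "Status": "Enabled",
                "Prefix": "",
                "Destination": {"Bucket": "arn:aws:s3:::ki-destination-bucket"},
            }
        ],
    },
)
```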
Great video, thank you so much. How can I restrict access to a file to a one-time-only download, to keep my customers from passing around the link and stealing information? So after they view a video, when they try to come back, they will not be able to.
@The Social MediaDiva Floyd You can generate a presigned URL for your file, and you can set the validity period of that URL.
Please do SHARE and SUBSCRIBE if this video helped you.
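A minimal boto3 sketch of generating such a URL (bucket and key hypothetical). Note that a presigned URL is time-limited rather than strictly one-time, so a true single-use download would need extra logic on your side:

```python
import boto3

s3 = boto3.client("s3")

# URL that lets anyone holding it download the object for 5 minutes
url = s3.generate_presigned_url(
    ClientMethod="get_object",
    Params={"Bucket": "ki-demo-bucket", "Key": "videos/lesson1.mp4"},
    ExpiresIn=300,  # seconds
)
print(url)
```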
Hello, I have a doubt about IAM policy vs. S3 bucket policy. Can you explain the difference?
Hello Anil,
I have put detailed videos on IAM and S3 bucket policy. Please watch those videos --- ua-cam.com/video/DH47Pu2DQBU/v-deo.html &
ua-cam.com/video/DXNS-EP9sXM/v-deo.html
Looking for your support always, please let your friends know by SHARING this.
Thanks for the support; this helps a lot. Looking forward to the same support. I find most of the content I need in your videos. Thanks for sharing the knowledge.
Sir, please let me know if you have similar coverage of Azure Cloud. I would like to join, please. Thank you.
Currently there are only a few videos on Azure. Check our playlists; there will be something coming in 2 months' time.
In the Bucket Policies session, to give permissions on a bucket or an object, we give an IAM user name. Suppose we want to give access to more than 10 users; how can we do that there? Do we have to give all the names in a row with a comma separator?
As far as I understand your question ---- YES.
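In a bucket policy, the Principal element takes a JSON array rather than a comma-separated string inside one ARN. A hedged sketch (account ID, user names, and bucket name hypothetical):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowReadForManyUsers",
      "Effect": "Allow",
      "Principal": {
        "AWS": [
          "arn:aws:iam::123456789012:user/user1",
          "arn:aws:iam::123456789012:user/user2",
          "arn:aws:iam::123456789012:user/user3"
        ]
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::ki-demo-bucket/*"
    }
  ]
}
```

For a large number of users, attaching an identity policy to an IAM group the users belong to is usually easier to maintain than listing every user ARN in the bucket policy.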
Hi, nice session. I have one question: I have written Java code for reading bucket objects, but I am getting a "not a valid key" error. Can you help me in this case?
The key is the unique ID for an object; in the object URL, it is the part after the bucket name. Check its case sensitivity, etc.
I faced an interview question: what steps would you perform to revoke a user's access to S3? Can you help me with this question?
Remove the IAM permissions and the bucket policy statement which granted the access.
Everything in S3 has changed now; can you upload the latest S3 video?
Good video, too many ads though. Had to watch 5 ads just to continue with your 25-minute video.
Useful video, but cross-region replication wasn't explained clearly. Can you explain clearly, with 2 bucket names, how cross-region replication works?
A company needs to have their data stored on AWS. The initial size of the data would be around 500 GB, with overall growth expected to reach 80 TB over the next couple of months. The solution must also be durable.
Which of the following would be an ideal storage option for such a requirement?
A. DynamoDB
B. Amazon S3
C. Amazon Aurora
D. Amazon Redshift
Hi, can you please make a video on how to upload/download using the Transfer Acceleration endpoint, using the console?
How about doing this in the new console? Thanks for the tutorial.
The concepts are the same; just a bit of UI change.
If the S3 URL is myteam.myoffice.com.s3.amazonaws.com/india/state/city.xls then what is the object name and the bucket name? I can understand that the bucket name is "myteam.myoffice.com". Is the object name "city.xls" or "india/state/city.xls"?
"india/state/city.xls" is Object Key and this should be unique. It is case sensitive as well.
You may want to join one of our upcoming trainings. Details are given here ---
aws-tutorials.blogspot.in/2017/06/aws-solutions-architect-associate.html
aws-tutorials.blogspot.in/2017/06/aws-sysops-administrator-associate.html
Please let me know.
I'm trying to create a bucket policy that changes the storage class 90 days after an object was last accessed. Can you help me?
An S3 bucket policy is related to permissions; it cannot change the storage class.
You can use a Lifecycle policy, which can change the class of objects based on their creation date (but not their access date).
If you want to change the class based on access date, you will have to write custom logic using Lambda.
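The core of that custom logic is a copy-in-place that rewrites the object under a new storage class; a minimal boto3 sketch (bucket and key hypothetical; the "last accessed" signal itself would have to come from S3 server access logs or CloudTrail, which this sketch assumes you already have):

```python
import boto3

s3 = boto3.client("s3")
bucket, key = "ki-demo-bucket", "reports/old-report.csv"  # hypothetical

# Copy the object onto itself with a different storage class;
# a Lambda function would call this for keys deemed "cold"
s3.copy_object(
    Bucket=bucket,
    Key=key,
    CopySource={"Bucket": bucket, "Key": key},
    StorageClass="STANDARD_IA",
    MetadataDirective="COPY",  # keep the existing metadata
)
```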
Please make a video about S3 and EC2 costing.
There is one on EC2 costing already; please see our playlists.
Hi team, requesting you to make a video on EC2 quickly.
There is a video already on EC2 - ua-cam.com/video/NaQN0oV_gpY/v-deo.html
Requesting you to check all the videos on Channel. You might find many of them useful to you.
Hello, could you please let me know the difference between object access and permission access?
Object access means access to the actual file stored in S3. Permission access means access to the ACL applied on that object/file.
Please share our videos if you liked it.
Thank you for your reply. I tried enabling both options for an authenticated AWS user and tried to view the file as an AWS user, but it's not working. But when I enabled object access for EVERYONE, an AWS user was able to view the file that had been stored by the one who uploaded it. How?
Sorry, but I am not able to understand the scenario well.
OK. I created a file and enabled object access and permission access for an authenticated AWS user, but when I tried to open that file as an AWS user I was not able to open it. Yet when I enabled object access for EVERYONE, as an AWS user I was able to open that file...
Hello, very nice explanation. I am facing a problem with simple static website hosting with S3 buckets for a root domain & subdomain. I created 2 buckets named after the domain name & subdomain name (vexample.com & www.vexample.com) and created this bucket policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::vexample.com/*"
    }
  ]
}
When I click on the endpoint of this bucket, it works. But for the bucket www.vexample.com, which is redirected to vexample.com, when I click on the endpoint it says "sorry, we could not find vexample.com" (it goes to the internet provider's search). Please help me out in sorting this.
Thanks
Are you doing the redirection at the S3 level or the Route 53 level? Mail me with screenshots.
Thank you so much for the reply. S3 level.
Not sure what you are missing; it normally works.
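For reference, the www bucket in this setup usually carries no content and only a redirect-all rule; a minimal boto3 sketch using the domain names from the question:

```python
import boto3

s3 = boto3.client("s3")

# The www bucket holds no content; it only redirects to the root domain
s3.put_bucket_website(
    Bucket="www.vexample.com",
    WebsiteConfiguration={
        "RedirectAllRequestsTo": {
            "HostName": "vexample.com",
            "Protocol": "http",
        }
    },
)
```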
I wanted to add a .txt file to my bucket, but it fails every time. Any help?
What is mounting?
How can I list objects from S3 when there are more than 1,000 keys?
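A single list call returns at most 1,000 keys, so the listing has to be paginated with continuation tokens; a minimal boto3 sketch (bucket name hypothetical):

```python
import boto3

s3 = boto3.client("s3")

# The paginator follows continuation tokens past the 1,000-key limit
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket="ki-demo-bucket"):
    for obj in page.get("Contents", []):
        print(obj["Key"])
```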
Hi sir, I am not able to do cross-region replication from another account; can you please solve it?
Look at the permissions given in the bucket policy. Please refer to the documentation; examples are there.
Sir, I am creating this policy for replication, but it gives me an "invalid resource name" error:
{
  "Version": "2008-10-17",
  "Id": "",
  "Statement": [
    {
      "Sid": "Stmt123",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::549979776016:root"
      },
      "Action": ["s3:ReplicateObject", "s3:ReplicateDelete"],
      "Resource": "arn:aws:s3:::*"
    }
  ]
}
I used the bucket name instead of *, but the result is the same.
You have step-by-step details here, please follow - docs.aws.amazon.com/AmazonS3/latest/dev/crr-walkthrough-2.html
Look closely: in the permission, the Resource should be bucket/*.
Share the video if you received help from our channel. Thanks.
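A hedged sketch of the corrected destination-bucket policy (account ID taken from the question, destination bucket name hypothetical). ReplicateObject and ReplicateDelete act on objects, so the Resource must be the bucket/* object ARN rather than the bare bucket ARN:

```json
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Sid": "Stmt123",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::549979776016:root"
      },
      "Action": ["s3:ReplicateObject", "s3:ReplicateDelete"],
      "Resource": "arn:aws:s3:::destination-bucket/*"
    }
  ]
}
```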
How do I set up cmd to use AWS commands?
Its explained here - ua-cam.com/video/DH47Pu2DQBU/v-deo.html
I would request to look at our playlists for AWS Certifications ---
Solutions Architect - ua-cam.com/video/ywHFXfuJoSU/v-deo.html
&&&
SysOps Administrator - ua-cam.com/video/UFSH-KuDGj8/v-deo.html
++++++++++++++++++++++++++++++++++++++++
I have answered a lot of AWS Interview questions in LIVE sessions here -- ua-cam.com/play/PLTyrc6mz8dg_tEexS22k_gmssDmkWkEMd.html
Connect with me on LinkedIn to read interesting AWS updates & Practical Scenario Questions --- www.linkedin.com/in/knowledgeindia
Don't miss any updates, please follow my FB page fb.me/AWStutorials
&
Twitter - twitter.com/#!/knowledge_india
And for AWS exercises & case-studies, you can refer our blog -- aws-tutorials.blogspot.com/
++++++++++++++++++++++++++++++++++++++++
Please share the latest video.
May I know about server-side encryption and client-side encryption?
Too many commercial breaks interrupting the video.
Sorry for inconvenience.
I keep getting an advertisement from Amity for blockchain... I will say don't spend money on blockchain... it will never succeed... Amity is a stupid institute making everyone a fool.
your voice is not clear
But S3 is a global service
Please check the location or region of your bucket
excellent explanation
Thanks a lot. Please support us by sharing the video with your friends on FB / Twitter / LinkedIn..
Thank you
Thanks again. Please support us by sharing this video with your friends on LinkedIn/FB.
sure