Restore S3 object from Glacier to Standard | Hands-On Session | Glacier Deep Archive | S3API AWS CLI

COMMENTS • 13

  • @beyondthecloud
    @beyondthecloud  9 months ago

    Please provide your valuable feedback in the comment section. Please like, share, and subscribe for more upcoming content.

  • @dheerajg8607
    @dheerajg8607 4 months ago

    Very good video explaining the various storage classes in S3, and how to restore via the CLI and the other available options.
    Very helpful. Thank you

  • @Indyc0lt
    @Indyc0lt 5 months ago

    This helped me understand the process. Thank you!! Subscribed!

  • @pupun813
    @pupun813 9 months ago

    Great content !!

  • @aluvalanaveen9459
    @aluvalanaveen9459 9 months ago

    Informative! How can I restore multiple files using the CLI? Also, the restored object is deleted after the specified number of days.
    What about the original copy, is it still available?

    • @beyondthecloud
      @beyondthecloud  9 months ago

      To restore multiple objects in AWS S3 using the CLI, you can use a script or a loop to iterate over each object and initiate a restore request for each one. Here's a simple bash script to achieve this:
      ===============================
      #!/bin/bash
      # Set your bucket name
      bucket="YOUR_BUCKET_NAME"
      # Define an array of object keys to restore
      objects=(
        "object_key_1"
        "object_key_2"
        "object_key_3"
        # Add more object keys as needed
      )
      # Set the number of days for restoration
      restore_days=30
      # Loop through each object and initiate restore
      for obj in "${objects[@]}"; do
        aws s3api restore-object --bucket "$bucket" --key "$obj" --restore-request "{\"Days\":$restore_days}"
      done
      ===============================
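      If your objects are in Glacier Deep Archive, you can also choose the retrieval tier explicitly (Standard or Bulk; Expedited is not supported for Deep Archive). A minimal variant of the restore call, assuming the same bucket, obj, and restore_days variables as in the script above:
      ===============================
      # Restore a Deep Archive object with the Bulk retrieval tier (cheapest, slowest)
      aws s3api restore-object --bucket "$bucket" --key "$obj" \
        --restore-request "{\"Days\":$restore_days,\"GlacierJobParameters\":{\"Tier\":\"Bulk\"}}"
      ===============================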
      Regarding the original copy of the object, when you initiate a restore request, AWS makes a temporary copy of the object available for you to access. After the specified number of days (restore_days), AWS automatically deletes the temporary copy whether or not it was accessed, but the original copy remains intact in the S3 bucket in its archive storage class.
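      You can check the status and expiry of that temporary copy with head-object: the Restore field shows whether the restore is still in progress and when the restored copy expires, while StorageClass still shows the archive class of the original. A quick sketch, using the same placeholder bucket and key names as above:
      ===============================
      # Check the restore status of a single object
      # Example output includes: Restore: ongoing-request="false", expiry-date="..."
      aws s3api head-object --bucket "YOUR_BUCKET_NAME" --key "object_key_1" \
        --query "{Restore: Restore, StorageClass: StorageClass}"
      ===============================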

  • @ankushjain358
    @ankushjain358 6 months ago

    Quite useful

  • @amardesai4150
    @amardesai4150 9 months ago

    The object is stored in the Intelligent-Tiering Archive Access tier. In order to access it, you must first restore it to the Frequent Access tier. How do I do this for the files in a folder in bulk?

    • @beyondthecloud
      @beyondthecloud  9 months ago

      To restore objects stored in the Intelligent-Tiering Archive Access tier to the Frequent Access tier in bulk, you can use the AWS CLI with the aws s3api commands. Here's how you can do it:
      List the objects: Use the aws s3api list-objects command to list the keys of the objects in the bucket that use the INTELLIGENT_TIERING storage class (the listing shows the storage class, not the access tier).
      Check the archive status: For each key, call aws s3api head-object and read the ArchiveStatus field to keep only the objects that are in the Archive Access (or Deep Archive Access) tier.
      Initiate the restore request: For each archived object, initiate a restore request; once the restore completes, the object is moved back to the Frequent Access tier.
      Here's a bash script to accomplish this:
      ==================
      #!/bin/bash
      # Set your bucket name
      bucket="YOUR_BUCKET_NAME"
      # List the keys of all Intelligent-Tiering objects in the bucket
      keys=$(aws s3api list-objects --bucket "$bucket" \
        --query "Contents[?StorageClass=='INTELLIGENT_TIERING'].Key" --output json)
      # Loop through each key, keep only archived objects, and initiate restore
      for key in $(echo "$keys" | jq -r '.[]'); do
        status=$(aws s3api head-object --bucket "$bucket" --key "$key" --query "ArchiveStatus" --output text)
        if [ "$status" == "ARCHIVE_ACCESS" ] || [ "$status" == "DEEP_ARCHIVE_ACCESS" ]; then
          # Intelligent-Tiering restores don't take Days; an empty restore request is enough
          aws s3api restore-object --bucket "$bucket" --key "$key" --restore-request '{}'
        fi
      done
      ==================
      Make sure you have jq installed for JSON parsing. This script does the following:
      Lists the keys of the Intelligent-Tiering objects in the specified bucket.
      Checks each object's ArchiveStatus and keeps only those in the Archive Access or Deep Archive Access tier.
      Initiates a restore request for each archived object; no restore period is specified because Intelligent-Tiering restores don't accept one, and once the restore completes the object is moved back to the Frequent Access tier.
      Replace YOUR_BUCKET_NAME with your bucket name. Save the script to a file (e.g., restore_intelligent_tiering.sh), make it executable, and then run it.
      This script iterates through all objects stored in the Intelligent-Tiering Archive Access tier in the specified bucket and initiates a restore request for each one so that it is moved back to the Frequent Access tier.
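      To run it and then confirm that an object has come back, a quick sketch (the script name and key are just the placeholders used above; ArchiveStatus should no longer be reported once the object is back in the Frequent Access tier):
      ==================
      # Make the script executable and run it
      chmod +x restore_intelligent_tiering.sh
      ./restore_intelligent_tiering.sh
      # Later, re-check one object's archive status
      aws s3api head-object --bucket "YOUR_BUCKET_NAME" --key "object_key_1" --query "ArchiveStatus"
      ==================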