A perfect explanation with an appropriate example and use case.
So anyone who is new to the concept can easily understand after watching the video:
1. What it is
2. Why it is used and why it is needed
3. How to do it end to end
So a perfect video, I would say, covering everything.
Thank you so much for your positive feedback @deepaksingh9318! I'm glad to hear that the explanation resonated well with you, and you found it comprehensive and helpful.
Thanks for the code, I was struggling to read the event to extract the bucket name and the object... this made my life easy.
Glad to hear the video is helpful to you S K! Happy Learning
Also, without adding an SQS trigger to the Lambda, how did it detect the S3 file uploads from the SQS trigger, as seen in the CloudWatch logs?
@likitan4076, I added the trigger at 9:27.
@@KnowledgeAmplifier1 To the SQS you added a Lambda trigger.. got it.. I was thinking of adding an SQS trigger to the Lambda function.
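For anyone wiring this up outside the console, here is a minimal sketch of how the two hookups discussed above could be done with boto3 -- S3 sends ObjectCreated notifications to the SQS queue, and the queue is then added as an event source (trigger) for the Lambda. The bucket name, queue ARN and function name below are hypothetical, and the queue's access policy must also allow the bucket to send messages to it (the console normally handles that part):

import boto3

s3 = boto3.client('s3')
lambda_client = boto3.client('lambda')

# S3 -> SQS: the bucket sends ObjectCreated notifications to the queue
s3.put_bucket_notification_configuration(
    Bucket='my-upload-bucket',  # hypothetical bucket name
    NotificationConfiguration={
        'QueueConfigurations': [{
            'QueueArn': 'arn:aws:sqs:us-east-1:123456789012:my-queue',  # hypothetical ARN
            'Events': ['s3:ObjectCreated:*']
        }]
    }
)

# SQS -> Lambda: the queue becomes an event source (trigger) for the function
lambda_client.create_event_source_mapping(
    EventSourceArn='arn:aws:sqs:us-east-1:123456789012:my-queue',
    FunctionName='my-processing-lambda',  # hypothetical function name
    BatchSize=10
)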
Can you tell me whether all the things you performed and used are free to use? I mean, if I build the same as yours, or make an integration of S3, SQS and Lambda, AWS would not apply a charge, right? And can you provide all the code and steps in a docx format?
Simple & straightforward.
Is it possible to get the uploaded file content as well in the SQS message somehow?
SQS has a message size limitation, and it's recommended to keep messages as small as possible. Including the actual content of a large file in an SQS message could potentially lead to exceeding these limitations. Moreover, SQS is more efficient when used to transmit metadata or information necessary to trigger subsequent actions.
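To make that concrete: SQS messages are capped at 256 KB, so the usual pattern is to send a small message that points at the object rather than one that carries it. A minimal sketch (the queue URL and object names are just placeholders):

import json
import boto3

sqs = boto3.client('sqs')

# Instead of embedding the file content, send only a small pointer to it
pointer = {'bucket': 'my-upload-bucket', 'key': 'uploads/bigfile.csv'}  # hypothetical names
sqs.send_message(
    QueueUrl='https://sqs.us-east-1.amazonaws.com/123456789012/my-queue',  # hypothetical URL
    MessageBody=json.dumps(pointer)
)
# The consumer then uses the bucket/key from the message to fetch the object from S3.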
@@KnowledgeAmplifier1 Thanks for your reply. I need to get the actual data of the uploaded file; can we do this by using any AWS service?
@@Polly10189 From the code explained in the video, you can get the S3 bucket name & key name; now you can use any Python module like boto3 or s3fs to read the data from S3 and perform various computations.
For example, if you want to read CSV data from S3, then here is the code --
import csv
import boto3

# Create the S3 client (inside Lambda, the function's IAM role can be used
# instead of hard-coded keys)
s3 = boto3.client(
    's3',
    aws_access_key_id='XYZACCESSKEY',
    aws_secret_access_key='XYZSECRETKEY',
    region_name='us-east-1'
)

# Fetch the object and split its body into lines
obj = s3.get_object(Bucket='bucket-name', Key='myreadcsvfile.csv')
data = obj['Body'].read().decode('utf-8').splitlines()

# Parse the lines as CSV records
records = csv.reader(data)
headers = next(records)
print('headers: %s' % (headers))
for eachRecord in records:
    print(eachRecord)
Like this, for different file formats, you can create the code and read from S3...
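For example, a rough sketch along the same lines for a JSON file (the key name is just a placeholder):

import json
import boto3

s3 = boto3.client('s3')

# Same idea for a JSON file: fetch the object and parse the body
obj = s3.get_object(Bucket='bucket-name', Key='myreadjsonfile.json')  # hypothetical key
payload = json.loads(obj['Body'].read().decode('utf-8'))
print(payload)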
@@KnowledgeAmplifier1 I am reading the path of the file uploaded to S3. It's working, thanks!
@@Polly10189 Glad to hear this! Happy Learning
What is an efficient and cost-effective way of moving messages from an SQS queue to an S3 bucket?
I have a Lambda function that processes messages from the SQS queue and deletes them once processing is done. I need to persist the SQS messages in S3 for compliance. Thank you.
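One common approach is to have the same Lambda write the raw message body to S3 before it does its normal processing. A minimal sketch, assuming an SQS-triggered handler and a hypothetical archive bucket (with an SQS trigger, Lambda removes the message automatically once the function succeeds):

import boto3

s3 = boto3.client('s3')

def lambda_handler(event, context):
    for record in event['Records']:
        # Archive the raw message body to S3 before normal processing
        s3.put_object(
            Bucket='my-compliance-archive-bucket',                # hypothetical bucket name
            Key='sqs-archive/' + record['messageId'] + '.json',   # one object per message
            Body=record['body'].encode('utf-8')
        )
        # ... existing message processing goes here ...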
Quick question: don't we have to upload a deployment zip with the json package in it? How does Lambda install that library?
Hello Ravi K Reddy, json is available by default in the AWS Lambda execution environment, so no deployment zip or Lambda layer is needed to use json. You can find the list of modules available in the Lambda execution environment for different Python versions here -- gist.github.com/gene1wood/4a052f39490fae00e0c3 Happy Learning
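As a quick illustration, here is a minimal handler sketch that uses only the built-in json module to pull the bucket and key out of the S3 notification delivered via SQS (field names follow the standard S3 event format; this is a simplified version, not the exact code from the video):

import json

def lambda_handler(event, context):
    # Each SQS record's body is a JSON string containing the S3 event notification
    for record in event['Records']:
        s3_event = json.loads(record['body'])
        for s3_record in s3_event.get('Records', []):
            bucket = s3_record['s3']['bucket']['name']
            key = s3_record['s3']['object']['key']
            print(bucket, key)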
Hi. I have a few doubts about Kafka. Can you please explain?
Hello KS Prem Kumar, please share your doubt here; if I know that topic, I will surely try to help as much as possible.
Great explanation... Is there any way, instead of SQS, we can use AWS EventBridge to trigger the Lambda?
2. Also, can you provide any Python or PySpark script through which we can load the CSV file into a Snowflake DB?
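On the second question, here is a rough sketch of loading a CSV from S3 into Snowflake with the Python connector (snowflake-connector-python). All connection parameters, table and stage names below are placeholders, and it assumes an external stage over the S3 bucket has already been created:

import snowflake.connector  # pip install snowflake-connector-python

# Hypothetical connection parameters
conn = snowflake.connector.connect(
    user='MY_USER',
    password='MY_PASSWORD',
    account='MY_ACCOUNT',
    warehouse='MY_WH',
    database='MY_DB',
    schema='PUBLIC'
)
cur = conn.cursor()
# COPY the staged CSV into a table (assumes stage @my_s3_stage points at the bucket)
cur.execute("""
    COPY INTO my_table
    FROM @my_s3_stage/myreadcsvfile.csv
    FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)
""")
cur.close()
conn.close()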
Hello Sir, could you please make a video on integrating an application using Amazon SQS?
How can I reach out for more information... can I get contact details, please?
Thanks a lot!
You are welcome Sergei Sizov! Happy Learning