Great video! I have two questions: 1. Why is the table in Redshift not in the same order as in the CSV file? 2. Why does the ETL job configuration choose the Data Catalog table as the S3 source type instead of the S3 location? Does that mean we can complete the incremental data load from S3 to Redshift by just choosing the S3 location and not using the crawler? Thanks in advance.
Thank you for watching my videos.
1. That is purely the behaviour of the ETL job that extracts the data from the S3 bucket.
2. You can use the S3 bucket directly as the source when you have a single CSV file. But when the bucket has multiple CSV files, it's better to use the Data Catalog so that you can map the schema from source to destination. A rough sketch of the catalog-based approach is below.
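To make that concrete, here is a minimal sketch of a Glue script that reads from a crawler-created catalog table and writes to Redshift through a Glue connection. The database, table, connection, and temp-bucket names are placeholders I've assumed, not the exact ones from the video.

```python
# Minimal sketch of a Glue job that reads through the Data Catalog and writes to Redshift.
# The database, table, connection, and temp-bucket names are placeholders, not the ones
# used in the video.
import sys
from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Source: the crawler-created catalog table supplies the schema used for mapping.
source = glue_context.create_dynamic_frame.from_catalog(
    database="etl_demo_db",          # placeholder catalog database
    table_name="testreport_csv",     # placeholder catalog table
)

# Target: an existing Glue connection that points at the Redshift cluster/workgroup.
glue_context.write_dynamic_frame.from_jdbc_conf(
    frame=source,
    catalog_connection="redshift-connection",               # placeholder connection name
    connection_options={"dbtable": "public.testreport", "database": "dev"},
    redshift_tmp_dir="s3://my-temp-bucket/redshift-temp/",  # placeholder temp path
)

job.commit()
```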
@@cloudquicklabs Regarding point 2, even if I have multiple CSV files with the same schema, I think there should be no reason to use a crawler, since we have a fixed schema in Redshift.
Thanks for the valuable inputs. Small query: what if my incremental file lands in a different folder in the same bucket? How is the crawler going to handle that?
Thank you for watching my videos.
The crawler would still work (it scrapes all data underneath the defined folder).
Great content !
I have a question on this. Can we run the MERGE operation on an external database as well, like Oracle, Snowflake, etc.?
Thank you for watching my videos.
Yes, we can do this.
I shall create a new video on this topic soon.
@@cloudquicklabs Hi, were you able to create this video already? Thanks
Thank you for this video, I was able to load the incremental data.
Thank you for watching my videos.
Glad that it helped you.
Thank you... very useful... great video👍 Small query: why is the table in Redshift not in the same order as in the CSV file?
Thank you for watching my videos.
Glad that it helped you.
If you mean the column order of the table, then I would say it does not matter as long as the proper records are getting updated in the respective columns.
Can you please explain why I need to run the crawler again if there are no changes to the file location or the fields/structure of the source (S3 file)?
Thank you for watching my videos.
Indeed, you don't need to re-run it if all files and folders have already been scraped by the crawler.
This video helped a lot. Is there a way you can add the trust relationships used during IAM role creation to the repo? Thank you
Thank you for watching my videos.
Did you see the dedicated video here: ua-cam.com/video/sw-a8nexTY8/v-deo.html
For incremental load to work, do we need to enable the job bookmark in Glue? Or is it not required?
Thank you for watching my videos.
It's not required; what's important is choosing the right option while declaring the destination in the ETL job. The configurations shown should be fair enough here. A rough sketch of that destination write is below.
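For reference, one common way a MERGE-style destination is expressed in a Glue script (whether hand-written or generated by the visual editor) is a staging table plus pre/post actions. This is only a hedged sketch; the table names, the `id` key column, the Glue connection name, and the temp-dir bucket are assumptions, not the video's exact values.

```python
# Hedged sketch of a MERGE-style (upsert) write to Redshift: land the rows in a staging
# table, then fold them into the target with postactions. Names below are assumptions.
from awsglue.context import GlueContext
from pyspark.context import SparkContext

glue_context = GlueContext(SparkContext.getOrCreate())

# Source read via the catalog table created by the crawler (placeholder names).
source = glue_context.create_dynamic_frame.from_catalog(
    database="etl_demo_db", table_name="testreport_csv"
)

pre_sql = "CREATE TABLE IF NOT EXISTS public.testreport_stage (LIKE public.testreport);"
post_sql = (
    "BEGIN;"
    "DELETE FROM public.testreport USING public.testreport_stage "
    "WHERE public.testreport.id = public.testreport_stage.id;"
    "INSERT INTO public.testreport SELECT * FROM public.testreport_stage;"
    "DROP TABLE public.testreport_stage;"
    "END;"
)

glue_context.write_dynamic_frame.from_jdbc_conf(
    frame=source,
    catalog_connection="redshift-connection",        # placeholder Glue connection
    connection_options={
        "dbtable": "public.testreport_stage",        # rows land in the staging table first
        "database": "dev",
        "preactions": pre_sql,
        "postactions": post_sql,
    },
    redshift_tmp_dir="s3://my-temp-bucket/redshift-temp/",
)
```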
Hello, please... do you hold classes anywhere, and do you provide project support?
Thank you for watching my videos.
I don't provide any classes.
But I do provide project support and project work.
@@cloudquicklabs How can I go about getting project help?
Much-awaited video, thank you so much. Could you attach the code here that was generated after the Glue job configuration?
Thank you for watching my videos.
Glad that the video is useful to you.
I have not copied the Glue job code, but I have attached the relevant files in the description. Please use those and follow the same steps; it should work for you.
Great video! 🤓
Thank you for watching my videos.
Glad that it helped you.
A few doubts. 1) In a real scenario, TestReport.csv won't be updated, right? Suppose we get TestReport_1.csv and TestReport_2.csv at particular intervals, and in these files we have a few old but updated records and a few new records. How do we handle such situations?
Please redirect me if you have already explained such situations in any of your videos!
Thank you for watching my videos.
I haven't covered those scenarios in these videos.
I shall create a new version of this video soon where I cover a demo of that scenario.
This is what I wanted.... Thank u so much 🙏
Thank you for watching my videos.
Glad that it helped you.
Why did you run the crawler after the Glue job finished and before running it for the second time? Is that required in order for the MERGE to succeed?
Thank you for watching my videos.
I wanted to show that when your source data gets incremented, the ETL pipeline copies only the incremental data, not duplicates.
@@cloudquicklabs Yes, I understand that, but before running the job for the second time, to illustrate that merge works, you run the crawler. Is that needed/recommended?
It's to identify any change in the schema; we run the crawler every time before running the job to keep the latest changes from the source.
Indeed, it's needed to load the new data from the file. Note that it's the same file in the same path. A rough orchestration sketch is below.
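As a hedged sketch of that crawler-then-job sequence (which a scheduler, workflow, or Step Functions would normally own), the crawler and job names below are placeholders I've assumed:

```python
# Hedged sketch: re-run the crawler so the catalog sees the updated file, then start the
# Glue job. Crawler and job names are placeholders, not necessarily those from the video.
import time
import boto3

glue = boto3.client("glue")

CRAWLER = "s3-testreport-crawler"   # placeholder crawler name
JOB = "s3-to-redshift-incremental"  # placeholder job name

glue.start_crawler(Name=CRAWLER)

# Poll until the crawler finishes before kicking off the ETL job.
while glue.get_crawler(Name=CRAWLER)["Crawler"]["State"] != "READY":
    time.sleep(30)

run = glue.start_job_run(JobName=JOB)
print("Started job run:", run["JobRunId"])
```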
But it is a data change, not a schema change... we can verify that by running an Athena query on the catalog table... Anyway, nice use-case explanation.
Do we need to run the crawler every time there is new data coming into S3?
Thank you for watching my videos.
Do you mean adding a Lambda trigger when objects are created in the S3 bucket, or just scheduling the Lambda?
@@cloudquicklabs In this video, at the 33-minute mark, the crawler was run again to fetch the incremental data. Do we really need to run the crawler if there is no schema change but only new data?
Indeed, I ran it a second time to fetch the data from the source, but only the incremental data is updated on the destination side.
I want to know why the row data shows up when you choose the Data Catalog table as the S3 source in your Glue job, because as I understand it the crawler only copies metadata from the S3 CSV into the Data Catalog, not the data, right?
When I try the same thing in my case, it says there is no data to display, because only the schema information was updated in the Data Catalog table. Please let me know.
Thank you for watching my videos.
It's a default preview from Glue, which displays some sample data when you are mapping tables.
In your case you must be missing a step.
Please watch the video again and follow the steps as shown.
In the case of SCD, how can we define the SCD logic for the merge and load? Which option provides this?
Thank you for watching my videos.
In terms of SCD, I believe the process would remain the same as long as the schema of the table remains the same.
I have two questions:
1. Why have you run the crawler twice?
2. If files arrive continuously (with different names), then how can we use a Glue job for the incremental load?
Thank you for watching my videos.
I ran it twice to demo the incremental load on the second run. When file names are continuously different, you would depend on the folder path of the files; also note that at least the columns should stay the same, and in the job you need to map them to the destination target tables. A rough mapping sketch is below.
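Here is a hedged sketch of that idea: read everything under a folder prefix regardless of file names, then map the columns explicitly onto the destination table. The bucket path, column names, and types are placeholders I've assumed.

```python
# Hedged sketch: read every CSV underneath a folder prefix (whatever the file names are)
# and map the columns explicitly onto the destination table's columns. The bucket path,
# column names, and types are placeholders.
from awsglue.context import GlueContext
from awsglue.transforms import ApplyMapping
from pyspark.context import SparkContext

glue_context = GlueContext(SparkContext.getOrCreate())

# Read all CSV files under the prefix, regardless of their names.
source = glue_context.create_dynamic_frame.from_options(
    connection_type="s3",
    connection_options={"paths": ["s3://my-input-bucket/testreports/"], "recurse": True},
    format="csv",
    format_options={"withHeader": True},
)

# Map source columns (read as strings from CSV) to the destination columns and types.
mapped = ApplyMapping.apply(
    frame=source,
    mappings=[
        ("id", "string", "id", "long"),
        ("test_name", "string", "test_name", "string"),
        ("status", "string", "status", "string"),
        ("executed_on", "string", "executed_on", "timestamp"),
    ],
)
```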
Amazing content!!!
When I'm trying to read data from S3 in AWS Glue, it is giving me an error.
Can you please guide me?
Thank you for watching my videos.
Could you please tell me what error you are getting?
What if the primary key is not in increasing or decreasing order in the data? Will the incremental data load work?
Thank you for watching my videos.
Indeed, it should load, as the primary key is unique in the table and records are identified by it.
Thank you so much, this was a good revision and gave me good ideas.
Thank you for watching my videos.
Glad that it helped you.
Nice video. I have a question: I am sending my data from a Raspberry Pi to S3, and it updates whenever I run it on the Raspberry Pi. Now, after my job succeeded, I updated that S3 data and ran the same job again, and I am getting an error. So what is the process?
Thank you for watching my videos.
You should not get any error when you run it twice. Did you check the logs to see what the error message is?
@@cloudquicklabs How do I check its logs? In the CloudWatch log groups, when I look at the live tail, it says it is 100% displayed. How do I resolve it?
Hello, the data loaded from S3 to Redshift is in a zig-zag manner. We have the source data in 1, 2, 3, 4 order, but the target is 1, 4, 11 and so on. How do we get the data into Redshift in serial order?
Thank you for watching my videos.
That should not be the case here; I believe only the columns may be reordered while the row data should stay intact.
Please watch this video again.
Could you please make a video on how to build fact & dimension tables in Redshift? It would be great if you make that video too, covering S3 to Redshift using Glue.
Thank you for watching my videos.
I shall work on this concept and create a video.
Is incremental loading available with SQL Server and Postgres as the target?
Thank you for watching my videos.
I shall create a new video on incremental loading from RDS (SQL Server/Postgres) to Amazon Redshift.
Until then, we still have a workaround here, i.e. first load to an S3 bucket and then load to Amazon Redshift.
@@cloudquicklabs She is referring to RDS as the target, loading from S3.
While testing the connection in Glue, I encountered a few errors like 'StatusLogger Unrecognized format specifier'. Can you help me with how to resolve those problems?
Thank you for watching my videos.
Have you followed the video directions as shown?
I'm not getting the endpoint, JDBC URL, or ODBC URL in the workgroup. How can we get them?
Thank you for watching my videos.
Please check the workgroup details page; it should be there.
Very useful.. thanks... subscribed now for more interesting content
Thank you for watching my videos.
Happy learning.
@@cloudquicklabs There is some issue with glueContext.write_dynamic_frame.from_catalog, whereas glueContext.write_dynamic_frame.from_jdbc_conf is working perfectly fine...
I am getting the below error while writing to the Redshift catalog table:
Error Category: UNCLASSIFIED_ERROR; An error occurred while calling o130.pyWriteDynamicFrame. Exception thrown in awaitResult:
SQLException thrown while running COPY query; will attempt to retrieve more information by querying the STL_LOAD_ERRORS table
Could you please guide me?
I have multiple large tables in CSV format in AWS S3, should I:
1. Load them from S3 to RDS (MySQL) => do my queries using MySQL Workbench => export the expected data to S3 => sync the expected data to SageMaker for visualization and other analysis using a notebook instance?
2. Load them from S3 to Redshift => run queries using Redshift. Actually, I'm not quite sure what to do next in this direction; the goal is to have some filtered data for visualization and analysis.
Thank you
Thank you for watching my videos.
Is your requirement here just to visualize the data, or do you want to run ML on it (since you said you want to use SageMaker, which is an ML-focused and costly tool)?
You could check out Amazon QuickSight if you are looking for visualization.
Thank you, @@cloudquicklabs. I actually want to do some ML, but that part can be done on a local machine. The primary goal now is to load data from S3 into something I can query to analyze the data better.
@@hoangng16 Once you catalog it, you can access it from Athena as well; a quick sketch is below.
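As a hedged sketch of that Athena option: once the crawler has cataloged the CSVs, you can query them in place without loading them anywhere. The database, table, and results-bucket names are placeholders I've assumed.

```python
# Hedged sketch: query the crawler-created catalog table from Athena.
# Database, table, and output-bucket names are placeholders.
import boto3

athena = boto3.client("athena")

response = athena.start_query_execution(
    QueryString="SELECT status, COUNT(*) FROM testreport_csv GROUP BY status",
    QueryExecutionContext={"Database": "etl_demo_db"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results-bucket/"},
)
print("Query execution id:", response["QueryExecutionId"])
```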
Easy and smooth.
Thank you for watching my videos.
Glad that it helped you.
Can you make a video on the whole use case: Oracle DB (source with CDC) --> S3 --> Glue (some transformation) --> Redshift warehouse, please?
Thank you for watching my videos.
Indeed, I shall add this to my to-do list and make a video on it.
Yes, I am also waiting for this type of project with e-commerce data.
Hi, I'm getting an 'invalid input exception' error, can you please help resolve it?
Thank you for watching my videos.
Could you please watch the video again?
Nice video! Can I just use the crawler to crawl new folders only, and then in the Glue job just use a bookmark and append the data instead of merging? If the table is large, merging will be a very expensive operation, no?
Thank you for watching my videos.
I have not created a video on dynamic folder structures in AWS Glue; I shall create one soon. There should not be any extra cost here. A rough bookmark-and-append sketch is below.
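For the bookmark-and-append idea, here is a hedged sketch: with job bookmarks enabled (job parameter `--job-bookmark-option` set to `job-bookmark-enable`) and a `transformation_ctx` on the source, each run reads only the files added since the last run, so the write can be a plain append instead of a merge. Names below are placeholders I've assumed.

```python
# Hedged sketch: job bookmarks + append instead of merge. Requires the job to run with
# --job-bookmark-option job-bookmark-enable. Database, table, and connection names are
# placeholders, not the video's exact values.
import sys
from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

source = glue_context.create_dynamic_frame.from_catalog(
    database="etl_demo_db",
    table_name="testreport_csv",
    transformation_ctx="source_bookmark",   # required for the bookmark to track state
)

glue_context.write_dynamic_frame.from_jdbc_conf(
    frame=source,
    catalog_connection="redshift-connection",
    connection_options={"dbtable": "public.testreport", "database": "dev"},
    redshift_tmp_dir="s3://my-temp-bucket/redshift-temp/",
    transformation_ctx="target_append",
)

job.commit()  # commits the bookmark state for the next run
```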
How do we ensure the primary key sequence stays intact with the source?
Thank you for watching my videos.
I believe you need to use Data Quality rules here to check whether your primary keys are in sequence. Maybe you should watch my latest video here: ua-cam.com/video/DMQRFwbeYMc/v-deo.htmlsi=mu-t_cNUIvXzHIXv, which might give you some ideas.
Jai Sevalal Bhai
Thank you for watching my videos.
Glad that it helped you.
Jai Sevalal.!
So my question is: in this video, why did you not schedule it? It is incremental data, and the data can land in S3 at any time. The incremental load should be scheduled so that when the data is uploaded, the job runs and executes automatically. So why do you run it manually, and how many times do you run the crawler and the job?
Thank you for watching my videos.
Indeed, it should be scheduled; I have mentioned that in the video.
As this is a demo, I have shown it via a manual trigger; a rough scheduling sketch is below.
I am working on v3 of this video, where I will cover the missing points.
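For the scheduling piece, here is a hedged sketch using Glue triggers (an EventBridge rule would be another option). The trigger names, crawler/job names, and cron expression are placeholders I've assumed.

```python
# Hedged sketch: a scheduled trigger starts the crawler; a conditional trigger then starts
# the job only after the crawler succeeds, so the two run in sequence. Names are placeholders.
import boto3

glue = boto3.client("glue")

glue.create_trigger(
    Name="hourly-incremental-crawl",
    Type="SCHEDULED",
    Schedule="cron(0 * * * ? *)",   # every hour, on the hour
    Actions=[{"CrawlerName": "s3-testreport-crawler"}],
    StartOnCreation=True,
)

glue.create_trigger(
    Name="run-job-after-crawl",
    Type="CONDITIONAL",
    Predicate={
        "Conditions": [
            {
                "LogicalOperator": "EQUALS",
                "CrawlerName": "s3-testreport-crawler",
                "CrawlState": "SUCCEEDED",
            }
        ]
    },
    Actions=[{"JobName": "s3-to-redshift-incremental"}],
    StartOnCreation=True,
)
```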
@@cloudquicklabs OK, has this been uploaded yet?
Not yet, I am still working on it.
It should automatically trigger based on an S3 event.
Thank you for watching my videos.
I shall create a new version of this video where this will be considered.
I tried the Redshift test connection but it failed with this error:
ERROR StatusLogger Unrecognized format specifier [d]
Thank you for watching my videos.
Are you getting the error while creating the connection or while executing the ETL pipeline?
While testing the connection to Redshift.
Which IAM role is given to the Redshift workgroup?
Thank you for watching my videos.
I am using admin permissions as this is a demo; a sketch of a more scoped role is below.
Did you check the reference documents shared at github.com/RekhuGopal/PythonHacks/tree/main/AWS_ETL_Increamental_Load_S3_to_RedShift ?
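As a hedged sketch of a more scoped role: a role the Redshift workgroup/namespace can assume, with S3 read access so COPY from S3 works. The role name and policy choices below are assumptions, not the exact setup used in the video.

```python
# Hedged sketch: create an IAM role Redshift can assume for COPY from S3, then attach it
# to the Redshift Serverless namespace. Role name and policies are assumptions; tighten
# S3 access to specific buckets for real workloads.
import json
import boto3

iam = boto3.client("iam")

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "redshift.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}

role = iam.create_role(
    RoleName="redshift-workgroup-s3-role",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
    Description="Lets Redshift read the staging data in S3",
)

for policy_arn in (
    "arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess",
    "arn:aws:iam::aws:policy/AmazonRedshiftAllCommandsFullAccess",
):
    iam.attach_role_policy(RoleName="redshift-workgroup-s3-role", PolicyArn=policy_arn)

print("Attach this role to the Redshift namespace:", role["Role"]["Arn"])
```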
@@cloudquicklabs
Which IAM role is given to the Redshift workgroup?
Please create and upload a basic video on it.
I want to build this project but don't understand which IAM role to give to the Redshift workgroup.
How much cost is expected if I try this with my personal AWS account?
Thank you for watching my videos.
All the resources involved in this solution are costly. Please make use of the AWS pricing calculators to estimate your cost here. As this was a lab session for me, I set everything up and cleaned it up once the lab was completed, so it would have cost me very little.
How do we load incremental data by using a Python script?
Thank you for watching my videos.
I shall do a video using Python scripts for the #ETL pipeline.
I am getting 'Target node is not supported'. What should I do?
Did you follow the video here?
@@cloudquicklabs yes
@@cloudquicklabs I am using DMS to transfer data from RDS MySQL to S3.
Then, using Glue, I am transferring the data from S3 to Redshift, and in Glue I am getting the error.
It is extremely useless!
Thank you for watching my videos.
Glad that it helped you.
This is a great video!! Thank you for this :)
My job failed with this error: Error Category: UNCLASSIFIED_ERROR; Failed Line Number: 20; An error occurred while calling o113.pyWriteDynamicFrame. Exception thrown in awaitResult::
Thank you for watching my videos.
The error could be due to many reasons. 1. Check whether the data format is as expected on the source side; this kind of error can happen because of a format or syntax mismatch. A sketch for pulling the underlying COPY error out of Redshift is below.
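When pyWriteDynamicFrame fails on the COPY step, the real reason is usually recorded in STL_LOAD_ERRORS on the Redshift side. Here is a hedged sketch using the Redshift Data API; the workgroup and database names are placeholders I've assumed.

```python
# Hedged sketch: read the most recent COPY failures from stl_load_errors via the
# Redshift Data API. Workgroup and database names are placeholders.
import time
import boto3

client = boto3.client("redshift-data")

resp = client.execute_statement(
    WorkgroupName="etl-demo-workgroup",   # or ClusterIdentifier=... for a provisioned cluster
    Database="dev",
    Sql="SELECT starttime, filename, line_number, colname, err_reason "
        "FROM stl_load_errors ORDER BY starttime DESC LIMIT 10;",
)

# Wait for the statement to finish, then print the rows.
while client.describe_statement(Id=resp["Id"])["Status"] not in ("FINISHED", "FAILED", "ABORTED"):
    time.sleep(2)

for row in client.get_statement_result(Id=resp["Id"])["Records"]:
    print([list(col.values())[0] for col in row])
```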
How do I contact you?
You could reach me over email vrchinnarathod@gmail.com
Thanks for the valuable inputs. Small query: what if my incremental file lands in a different folder in the same bucket? How is the crawler going to handle that?
Thank you for watching my videos.
While defining the crawler you give the path; choose the right path accordingly, and the crawler will scrape all data from all files and folders present underneath the defined folder. A rough crawler definition is sketched below.
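As a hedged sketch of that: the crawler points at the folder prefix, so every file and sub-folder underneath it (including date-partitioned ones) is picked up, and the recrawl policy can limit re-crawls to new folders only. The crawler name, role ARN, database, and bucket path are placeholders I've assumed.

```python
# Hedged sketch: crawl everything under a folder prefix; re-crawls only visit new folders.
# Names, role ARN, and bucket path are placeholders.
import boto3

glue = boto3.client("glue")

glue.create_crawler(
    Name="s3-testreport-crawler",
    Role="arn:aws:iam::123456789012:role/glue-crawler-role",   # placeholder role ARN
    DatabaseName="etl_demo_db",
    Targets={"S3Targets": [{"Path": "s3://my-input-bucket/testreports/"}]},
    # Incremental crawls (new folders only) require the LOG schema-change policy.
    SchemaChangePolicy={"UpdateBehavior": "LOG", "DeleteBehavior": "LOG"},
    RecrawlPolicy={"RecrawlBehavior": "CRAWL_NEW_FOLDERS_ONLY"},
)
```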
@@cloudquicklabs Thanks for the reply. What if my folder structure is like the one below?
input_bucket/year={current_year}/month={current_month}/day={current_date}/file.txt
How can I define my crawler to check the files based on date and load the data incrementally rather than as a full refresh... any idea?