ETL | Incremental Data Load from Amazon S3 Bucket to Amazon Redshift Using AWS Glue | Datawarehouse

COMMENTS • 115

  • @zzzzzzzmr9759
    @zzzzzzzmr9759 9 months ago +4

    Great video! I have two questions: 1. Why is the table in Redshift not in the same order as in the CSV file? 2. Why does the ETL job configuration choose the Data Catalog table as the S3 source type instead of the S3 location? Does that mean we could complete the incremental data load from S3 to Redshift by just choosing the S3 location and not using the crawler? Thanks in advance.

    • @cloudquicklabs
      @cloudquicklabs  9 months ago +1

      Thank you for watching my videos.
      1. That is purely down to how the ETL job extracts the data from the S3 bucket; row order is not preserved.
      2. We can use the S3 bucket directly as the source when you have a single CSV file. But when the bucket has multiple CSV files it's better to use the Data Catalog so that you can map the schema from source to destination.
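
      For illustration, a minimal sketch of the two source options discussed above (the database, table, and bucket names are hypothetical, not from the video):

      from awsglue.context import GlueContext
      from pyspark.context import SparkContext

      glueContext = GlueContext(SparkContext.getOrCreate())

      # Option A: read through the Data Catalog table that the crawler built
      source_catalog = glueContext.create_dynamic_frame.from_catalog(
          database="etl_demo_db",       # hypothetical catalog database
          table_name="testreport_csv",  # hypothetical crawler-created table
      )

      # Option B: read the CSV files straight from S3, with no crawler involved
      source_s3 = glueContext.create_dynamic_frame.from_options(
          connection_type="s3",
          connection_options={"paths": ["s3://my-source-bucket/reports/"]},
          format="csv",
          format_options={"withHeader": True},
      )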

    • @thelifehackerpro9943
      @thelifehackerpro9943 2 months ago

      @@cloudquicklabs Regarding point 2: even if I have multiple CSV files, as long as they share the same schema I think there should be no reason to use the crawler, since we have a fixed schema in Redshift.

  • @saikrishna8338
    @saikrishna8338 7 months ago +2

    Thanks for the valuable inputs. Small query: what if my incremental file lands in a different folder in the same bucket? How is the crawler going to handle that?

    • @cloudquicklabs
      @cloudquicklabs  7 months ago +1

      Thank you for watching my videos.
      The crawler would still work (it scrapes all data underneath the defined folder).
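
      A minimal boto3 sketch of that point (crawler, role, database, and path names are hypothetical): because the crawler target is a folder prefix, every file and sub-folder underneath it is scanned on each run.

      import boto3

      glue = boto3.client("glue")

      glue.create_crawler(
          Name="s3-incremental-crawler",
          Role="AWSGlueServiceRole-demo",
          DatabaseName="etl_demo_db",
          # Everything under this prefix gets crawled, including new sub-folders
          Targets={"S3Targets": [{"Path": "s3://my-source-bucket/reports/"}]},
      )
      glue.start_crawler(Name="s3-incremental-crawler")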

  • @awaise92
    @awaise92 7 months ago +2

    Great content!
    I have a question on this: can we run the MERGE operation on an external database as well, like Oracle, Snowflake, etc.?

    • @cloudquicklabs
      @cloudquicklabs  7 months ago +1

      Thank you for watching my videos.
      Yes, we can do this.
      I shall create a new video on this topic soon.

    • @jeffrey6124
      @jeffrey6124 15 days ago

      @@cloudquicklabs Hi, were you able to create this video already? Thanks

  • @prabhajayashetty2297
    @prabhajayashetty2297 26 days ago +1

    Thank you for this video, I was able to load the incremental data.

    • @cloudquicklabs
      @cloudquicklabs  26 days ago

      Thank you for watching my videos.
      Glad that it helped you.

  • @tejpatta3641
    @tejpatta3641 6 months ago +1

    Thank you... very useful... great video 👍 Small query: why is the table in Redshift not in the same order as in the CSV file?

    • @cloudquicklabs
      @cloudquicklabs  6 months ago

      Thank you for watching my videos.
      Glad that it helped you.
      If you mean the column order of the table, then I would say it does not matter as long as the right records end up in the respective columns.

  • @rahulsood81
    @rahulsood81 7 months ago +2

    Can you please explain why I need to run the crawler again if there are no changes to the file location or to the fields/structure of the source (S3 file)?

    • @cloudquicklabs
      @cloudquicklabs  7 months ago

      Thank you for watching my videos.
      Indeed, you don't need to re-run it if all files and folders have already been scraped by the crawler.

  • @SriHarikakaruturi-d4k
    @SriHarikakaruturi-d4k 10 months ago +1

    This video helped a lot. Is there a way you can add trust relationships during IAM role creation in the repo? Thank you

    • @cloudquicklabs
      @cloudquicklabs  10 months ago

      Thank you for watching my videos.
      Did you see the dedicated video here: ua-cam.com/video/sw-a8nexTY8/v-deo.html

  • @jnana1985
    @jnana1985 7 months ago +1

    For the incremental load to work, do we need to enable the job bookmark in Glue? Or is it not required?

    • @cloudquicklabs
      @cloudquicklabs  7 months ago +1

      Thank you for watching my videos.
      It's not required; what matters is choosing the right option while declaring the destination in the ETL job. The configuration shown should be fair enough here.
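
      A minimal sketch of what the MERGE-style destination boils down to, assuming hypothetical table, column, and connection names (the video's generated script may differ): the job loads new rows into a staging table (preactions), then merges them into the target on the primary key (postactions), so re-running the job does not create duplicates.

      from awsglue.context import GlueContext
      from pyspark.context import SparkContext

      glueContext = GlueContext(SparkContext.getOrCreate())

      source = glueContext.create_dynamic_frame.from_catalog(
          database="etl_demo_db", table_name="testreport_csv"
      )

      pre_sql = (
          "DROP TABLE IF EXISTS public.testreport_stage; "
          "CREATE TABLE public.testreport_stage (LIKE public.testreport);"
      )
      post_sql = (
          "MERGE INTO public.testreport USING public.testreport_stage s "
          "ON public.testreport.id = s.id "
          "WHEN MATCHED THEN UPDATE SET name = s.name, status = s.status "
          "WHEN NOT MATCHED THEN INSERT VALUES (s.id, s.name, s.status); "
          "DROP TABLE public.testreport_stage;"
      )

      glueContext.write_dynamic_frame.from_jdbc_conf(
          frame=source,
          catalog_connection="redshift-demo-connection",  # Glue connection name
          connection_options={
              "dbtable": "public.testreport_stage",
              "database": "dev",
              "preactions": pre_sql,
              "postactions": post_sql,
          },
          redshift_tmp_dir="s3://my-temp-bucket/redshift/",
      )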

  • @KwakuBoateng-l1f
    @KwakuBoateng-l1f 9 months ago +1

    Hello, please: do you hold classes anywhere, and do you provide project support?

    • @cloudquicklabs
      @cloudquicklabs  9 months ago +1

      Thank you for watching my videos.
      I don't provide any classes.
      But I do provide project support and project work.

    • @KwakuBoateng-l1f
      @KwakuBoateng-l1f 9 months ago

      @@cloudquicklabs How can I go about getting project help?

  • @nitikjain993
    @nitikjain993 10 months ago +1

    Much-awaited video, thank you so much. Could you attach here the code that was generated after the Glue job configuration?

    • @cloudquicklabs
      @cloudquicklabs  10 months ago

      Thank you for watching my videos.
      Glad that it's a useful video for you.
      I have not copied the Glue job code, but I have attached the relevant files in the description. Please use those and follow the steps; it should work for you.

  • @jeffrey6124
    @jeffrey6124 15 days ago +1

    Great video! 🤓

    • @cloudquicklabs
      @cloudquicklabs  14 days ago +1

      Thank you for watching my videos.
      Glad that it helped you.

  • @KapilKumar-hk9xk
    @KapilKumar-hk9xk 23 days ago +1

    A few doubts. 1) In a real scenario TestReport.csv won’t be updated, right? Suppose we receive TestReport_1.csv and TestReport_2.csv at particular intervals, and these files contain a few old but updated records and a few new records. How do we handle such situations?
    Please redirect me if you have already explained such situations in any of your videos!

    • @cloudquicklabs
      @cloudquicklabs  22 days ago +1

      Thank you for watching my videos.
      I haven't covered those scenarios in this video.
      I shall create a new version of this video soon where I demo those scenarios.

  • @akshaymuktiramrodge3233
    @akshaymuktiramrodge3233 11 months ago +1

    This is what I wanted.... Thank u so much 🙏

    • @cloudquicklabs
      @cloudquicklabs  11 months ago

      Thank you for watching my videos.
      Glad that it helped you.

  • @nlopedebarrios
    @nlopedebarrios 10 months ago +1

    Why did you run the crawler after the Glue job finished and before running it for the second time? Is that required in order for the MERGE to succeed?

    • @cloudquicklabs
      @cloudquicklabs  10 months ago

      Thank you for watching my videos.
      I wanted to show that when your source data grows, the ETL pipeline copies only the incremental data, not duplicates.

    • @nlopedebarrios
      @nlopedebarrios 10 months ago +1

      @@cloudquicklabs Yes, I understand that, but before running the job for the second time, to illustrate that the merge works, you ran the crawler. Is that needed/recommended?

    • @basavarajpn4801
      @basavarajpn4801 10 months ago +1

      It's to identify any change in the schema; we run the crawler every time before running the job so that the latest changes from the source are picked up.

    • @cloudquicklabs
      @cloudquicklabs  10 months ago

      Indeed, it's needed to load the data from the file. Note it's the same file in the same path.

    • @gauravrai4398
      @gauravrai4398 8 months ago

      But it is a data change, not a schema change... We can verify by running an Athena query on the catalog table... Anyway, nice use case explanation.

  • @jaideep1222
    @jaideep1222 5 months ago +1

    Do we need to run the crawler every time new data comes into S3?

    • @cloudquicklabs
      @cloudquicklabs  5 months ago

      Thank you for watching my videos.
      Do you mean adding a Lambda trigger when objects are created in the S3 bucket, or just scheduling the Lambda?

    • @jaideep1222
      @jaideep1222 5 months ago +1

      @@cloudquicklabs In this video, at the 33-minute mark, the crawler was run again to fetch the incremental data. Do we really need to run the crawler if there is no schema change but only new data?

    • @cloudquicklabs
      @cloudquicklabs  5 months ago

      Indeed, I ran it a second time to fetch the data from the source, but only the incremental data is updated on the destination side.

  • @thihakyaw6189
    @thihakyaw6189 10 months ago +1

    I want to know why the row data shows up when you choose the Data Catalog table as the S3 source in your Glue job, because I thought the crawler only copies metadata from the S3 CSV to the Data Catalog, not the data, right?
    When I try it in my case, it says there is no data to display, because only schema information is updated in the Data Catalog table. Please let me know.

    • @cloudquicklabs
      @cloudquicklabs  10 months ago

      Thank you for watching my videos.
      It's a default preview from Glue, which displays some sample data when you are mapping tables.
      In your case you must be missing one step.
      Please watch the video again and do the steps as shown.

  • @snakhil90
    @snakhil90 5 months ago +1

    In the case of SCD, how can we define the SCD logic for the merge and load? Which option provides this?

    • @cloudquicklabs
      @cloudquicklabs  5 months ago

      Thank you for watching my videos.
      In terms of SCD, I believe the process would remain the same as long as the schema of the table remains the same.

  • @RajYadav-eb6pp
    @RajYadav-eb6pp 5 months ago +1

    I have two questions:
    1. Why have you run the crawler twice?
    2. If files arrive continuously (with different names), then how can we use a Glue job for the incremental load?

    • @cloudquicklabs
      @cloudquicklabs  5 months ago

      Thank you for watching my videos.
      I ran it twice to demo the incremental load on the second run. And when file names are continuously different, you would rely on the folder path of the files; also note that at least the columns should stay the same, and in the job you need to map them to the destination target tables.
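
      A small sketch of that mapping step (database, table, and column names are hypothetical): as long as every new file keeps the same columns, the crawled table can point at the folder rather than one file name, and the job maps those columns onto the Redshift target explicitly.

      from awsglue.context import GlueContext
      from awsglue.transforms import ApplyMapping
      from pyspark.context import SparkContext

      glueContext = GlueContext(SparkContext.getOrCreate())

      source = glueContext.create_dynamic_frame.from_catalog(
          database="etl_demo_db", table_name="reports_folder"
      )

      # (source column, source type, target column, target type)
      mapped = ApplyMapping.apply(
          frame=source,
          mappings=[
              ("id", "long", "id", "int"),
              ("name", "string", "name", "string"),
              ("status", "string", "status", "string"),
          ],
      )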

  • @alphamaryfrancis862
    @alphamaryfrancis862 9 months ago +1

    Amazing content!!!
    When I'm trying to read data from S3 in AWS Glue it is giving me an error.
    Can you please guide me?

    • @cloudquicklabs
      @cloudquicklabs  9 months ago

      Thank you for watching my videos.
      Could you please tell me what error you are getting?

  • @kakaji_cloud8106
    @kakaji_cloud8106 7 months ago +1

    What if the primary key is not in increasing or decreasing order in the data? Will the incremental data load still work?

    • @cloudquicklabs
      @cloudquicklabs  7 months ago

      Thank you for watching my videos.
      Indeed, it should load, as the primary key is unique in the table and records are identified by it.

  • @kalyanishende618
    @kalyanishende618 10 months ago +1

    Thank you so much, this was a good revision and gave me good ideas.

    • @cloudquicklabs
      @cloudquicklabs  10 months ago

      Thank you for watching my videos.
      Glad that it helped you.

  • @archanvyas4891
    @archanvyas4891 5 months ago +1

    Nice video. I have a question: I am sending my data from a Raspberry Pi to S3, and it updates whenever I run it on the Raspberry Pi. Now when my job has succeeded, that S3 data is updated and I run the same job again, I am getting an error. So what's the process?

    • @cloudquicklabs
      @cloudquicklabs  5 months ago

      Thank you for watching my videos.
      You should not get any error when you run it twice. Did you check the logs for the error message that you see there?

    • @archanvyas4891
      @archanvyas4891 5 months ago

      @@cloudquicklabs How do I check its logs? In the CloudWatch log groups, when I watch the live tail it says it is 100% displayed. How do I resolve it?

  • @sapramprasannakumar8616
    @sapramprasannakumar8616 4 months ago +1

    Hello, the data loaded from S3 to Redshift comes out in a zig-zag manner. The source data is in 1, 2, 3, 4 order, but the target is 1, 4, 11 and so on. How do I get the data into Redshift in serial order?

    • @cloudquicklabs
      @cloudquicklabs  4 months ago

      Thank you for watching my videos.
      That should not be the case here; I believe only the columns may be out of order, while the row data should be intact.
      Please watch this video again.

  • @nitikjain993
    @nitikjain993 10 months ago +2

    Could you please make a video on how to build fact and dimension tables in Redshift? It would be great if you made that video too, S3 to Redshift using Glue.

    • @cloudquicklabs
      @cloudquicklabs  10 months ago

      Thank you for watching my videos.
      I shall work on this concept and create a video.

  • @lipi1004
    @lipi1004 11 months ago +1

    Is incremental loading available with SQL Server and Postgres as the target?

    • @cloudquicklabs
      @cloudquicklabs  11 months ago +1

      Thank you for watching my videos.
      I shall create a new video on incremental loading from RDS (SQL Server/Postgres) to Amazon Redshift.
      If not, we still have a workaround here, i.e. first load to an S3 bucket and then load to Amazon Redshift.

    • @rahulpanda9256
      @rahulpanda9256 9 months ago

      @@cloudquicklabs She is referring to RDS as the target, loading from S3.

  • @kovirisiva3567
    @kovirisiva3567 1 month ago +1

    While testing the connection in Glue I encountered a few errors like "StatusLogger unrecognized format specifiers". Can you help me with how to handle those problems?

    • @cloudquicklabs
      @cloudquicklabs  1 month ago

      Thank you for watching my videos.
      Have you followed the video directions as expected?

  • @satheeshambilaeti1894
    @satheeshambilaeti1894 1 month ago +1

    I'm not getting the endpoint, JDBC URL, or ODBC URL in the workgroup. How can we get them?

    • @cloudquicklabs
      @cloudquicklabs  1 month ago

      Thank you for watching my videos.
      Please check the workgroup; it should be there.

  • @prabhathkota107
    @prabhathkota107 9 months ago +1

    Very useful.. thanks... subscribed now for more interesting content

    • @cloudquicklabs
      @cloudquicklabs  9 months ago

      Thank you for watching my videos.
      Happy learning.

    • @prabhathkota107
      @prabhathkota107 6 months ago

      @@cloudquicklabs Some issue with glueContext.write_dynamic_frame.from_catalog, whereas glueContext.write_dynamic_frame.from_jdbc_conf is working perfectly fine...
      Getting the below error while writing to the Redshift catalog table:
      Error Category: UNCLASSIFIED_ERROR; An error occurred while calling o130.pyWriteDynamicFrame. Exception thrown in awaitResult:
      SQLException thrown while running COPY query; will attempt to retrieve more information by querying the STL_LOAD_ERRORS table
      Could you please guide?
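
      When the COPY step fails like this, the root cause usually sits in STL_LOAD_ERRORS. A minimal sketch of pulling the latest entries with the Redshift Data API (the workgroup and database names are hypothetical):

      import boto3

      client = boto3.client("redshift-data")
      resp = client.execute_statement(
          WorkgroupName="etl-demo-workgroup",
          Database="dev",
          Sql="SELECT query, filename, line_number, colname, err_reason "
              "FROM stl_load_errors ORDER BY starttime DESC LIMIT 20;",
      )
      # Fetch the rows later with client.get_statement_result(Id=resp["Id"])
      print(resp["Id"])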

  • @hoangng16
    @hoangng16 10 months ago +1

    I have multiple large tables in CSV format in AWS S3. Should I:
    1. Load them from S3 to RDS (MySQL) => do my queries using MySQL Workbench => export the expected data to S3 => sync the expected data to SageMaker for visualization and other analysis using a Notebook instance?
    2. Load them from S3 to Redshift => do the queries using Redshift; actually I'm not quite sure what to do next in this direction. The goal is to have some filtered data for visualization and analysis.
    Thank you

    • @cloudquicklabs
      @cloudquicklabs  10 months ago

      Thank you for watching my videos.
      Is your requirement here just to visualize data, or do you want to run ML on it (since you mentioned SageMaker, which is meant for ML and is a costly tool)?
      You could look at Amazon QuickSight if you are after visualization.

    • @hoangng16
      @hoangng16 10 months ago

      Thank you, @@cloudquicklabs. I actually want to do some ML, but that part can be done on a local machine. The primary goal now is to load data from S3 into something I can query to analyze the data better.

    • @SidharthanPV
      @SidharthanPV 2 months ago

      @@hoangng16 Once you catalog it you can access it from Athena as well.

  • @shakthimaan007
    @shakthimaan007 3 months ago +1

    Easy and smooth.

    • @cloudquicklabs
      @cloudquicklabs  3 months ago

      Thank you for watching my videos.
      Glad that it helped you.

  • @mahir14_
    @mahir14_ 6 months ago +1

    Can you make a whole use case of Oracle DB (source with CDC) --> S3 --> Glue (some transformation) --> Redshift warehouse, please?

    • @cloudquicklabs
      @cloudquicklabs  6 months ago

      Thank you for watching my videos.
      Indeed, I shall add this to my to-do list and make a video on it.

    • @ranidalvi1064
      @ranidalvi1064 6 months ago +1

      Yes, I am also waiting for this type of project with e-commerce data.

  • @RupeshKumar-kw7zw
    @RupeshKumar-kw7zw 11 months ago +1

    Hi, I'm getting an invalid input exception error; can you please help resolve it?

    • @cloudquicklabs
      @cloudquicklabs  11 months ago

      Thank you for watching my videos.
      Could you please watch the video again?

  • @liubrian6843
    @liubrian6843 1 month ago +1

    Nice video! Can I just use the crawler to crawl new folders only, and then in the Glue job use a bookmark and append the data instead of merging? If the table is large, merging will be a very expensive operation, no?

    • @cloudquicklabs
      @cloudquicklabs  1 month ago

      Thank you for watching my videos.
      I have not created a video on the dynamic folder structure in AWS Glue; I shall create it soon. There should not be any extra cost here.

  • @rahulpanda9256
    @rahulpanda9256 9 months ago +1

    How do we ensure the primary key sequence stays intact with the source?

    • @cloudquicklabs
      @cloudquicklabs  9 months ago

      Thank you for watching my videos.
      I believe you need to use Data Quality rules here to check whether your primary keys are in sequence. Maybe watch my latest video here: ua-cam.com/video/DMQRFwbeYMc/v-deo.htmlsi=mu-t_cNUIvXzHIXv which might give you ideas.
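
      For what it's worth, a minimal sketch of such a ruleset in Glue Data Quality's DQDL (the column name is hypothetical); it would be attached to the job via the Evaluate Data Quality transform:

      # DQDL ruleset as a plain string; "id" stands in for the real primary key column
      ruleset = """
      Rules = [
          IsComplete "id",
          IsUnique "id"
      ]
      """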

  • @cromagaming4694
    @cromagaming4694 13 days ago +1

    Jai Sevalal Bhai

    • @cloudquicklabs
      @cloudquicklabs  12 days ago +1

      Thank you for watching my videos.
      Glad that it helped you.
      Jai Sevalal.!

  • @GaneshBabu-vr2lg
    @GaneshBabu-vr2lg 5 months ago +1

    So my question is: why did you not schedule it in this video? It is incremental data, and the data can land in S3 at any time, so the incremental load should be scheduled so that the job runs and executes automatically whenever data is uploaded. Why do you trigger the job run manually, and how many times do you run the crawler and the job?

    • @cloudquicklabs
      @cloudquicklabs  5 months ago

      Thank you for watching my videos.
      Indeed, it should be scheduled; I have mentioned that in the video.
      As this is a demo, I have shown it via a manual trigger.
      I am working on v3 of this video, where I cover the missing points.

    • @GaneshBabu-vr2lg
      @GaneshBabu-vr2lg 5 months ago +1

      @@cloudquicklabs OK, has this been uploaded?

    • @cloudquicklabs
      @cloudquicklabs  5 months ago +1

      Not yet, I am still working on it.

  • @thelifehackerpro9943
    @thelifehackerpro9943 3 months ago +1

    It should automatically trigger based on an S3 event.

    • @cloudquicklabs
      @cloudquicklabs  3 months ago

      Thank you for watching my videos.
      I shall create a new version of this video where this will be covered.
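
      A minimal sketch of that kind of automation, assuming a Lambda function subscribed to the bucket's ObjectCreated events (the job name is hypothetical):

      import boto3

      glue = boto3.client("glue")

      def lambda_handler(event, context):
          # Log which uploads triggered the run, then kick off the ETL job
          for record in event.get("Records", []):
              print("New object:", record["s3"]["object"]["key"])
          return glue.start_job_run(JobName="s3-to-redshift-incremental")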

  • @supriyakulkarni8378
    @supriyakulkarni8378 8 months ago +1

    Tried the Redshift test connection but it failed with this error:
    ERROR StatusLogger Unrecognized format specifier [d]

    • @cloudquicklabs
      @cloudquicklabs  8 months ago

      Thank you for watching my videos.
      Are you getting the error while creating the connection or while executing the ETL pipeline?

    • @supriyakulkarni8378
      @supriyakulkarni8378 8 months ago

      While testing the connection to Redshift.

  • @yugalbawankar9039
    @yugalbawankar9039 11 months ago +1

    Which IAM role is given to the Redshift workgroup?

    • @cloudquicklabs
      @cloudquicklabs  11 months ago

      Thank you for watching my videos.
      I am using admin permissions as it is a demo.
      Did you check the reference documents shared at github.com/RekhuGopal/PythonHacks/tree/main/AWS_ETL_Increamental_Load_S3_to_RedShift

    • @yugalbawankar9039
      @yugalbawankar9039 11 months ago

      @@cloudquicklabs
      Which IAM role is given to the Redshift workgroup?
      Please create and upload a basic video on it.
      I want to build this project, but I don't understand which IAM role is given to the Redshift workgroup.

    • @kalyanishende618
      @kalyanishende618 10 months ago +1

      How much cost is expected if I try this with my personal AWS account?

    • @cloudquicklabs
      @cloudquicklabs  10 months ago

      Thank you for watching my videos.
      All the resources involved in this solution are costly. Please make use of the AWS pricing calculators to estimate your cost here. As this was a lab session for me, I set it up and cleaned it up once the lab was completed, so it cost me very little.

  • @AliMirfaisal
    @AliMirfaisal 7 months ago +1

    How do I load incremental data using a Python script?

    • @cloudquicklabs
      @cloudquicklabs  7 months ago

      Thank you for watching my videos.
      I shall do one video using Python scripts for the #ETL pipeline.

  • @swapnil_jadhav1
    @swapnil_jadhav1 7 months ago +1

    I'm getting "Target node is not supported".
    What should I do?

    • @cloudquicklabs
      @cloudquicklabs  7 months ago

      Did you follow the video here?

    • @swapnil_jadhav1
      @swapnil_jadhav1 7 months ago

      @@cloudquicklabs yes

    • @swapnil_jadhav1
      @swapnil_jadhav1 7 months ago

      @@cloudquicklabs I am using DMS to transfer data from RDS MySQL to S3.
      Then, using Glue, I am transferring data from S3 to Redshift, and in Glue I am getting the error.

  • @InnoCoreAnalyticsInc
    @InnoCoreAnalyticsInc 2 months ago +1

    It is extremely useless!

    • @cloudquicklabs
      @cloudquicklabs  2 months ago

      Thank you for watching my videos.
      Glad that it helped you.

  • @prabhajayashetty2297
    @prabhajayashetty2297 1 month ago +1

    This is a great video!! Thank you for this :)
    My job failed with the error: Error Category: UNCLASSIFIED_ERROR; Failed Line Number: 20; An error occurred while calling o113.pyWriteDynamicFrame. Exception thrown in awaitResult::

    • @cloudquicklabs
      @cloudquicklabs  1 month ago

      Thank you for watching my videos.
      The error could be due to many reasons. 1. Check whether the data format is as expected on the source side; this kind of error can happen due to a syntactical error in the data.
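
      A few quick sanity checks on the source side often surface the problem before the Redshift write (database and table names are hypothetical):

      from awsglue.context import GlueContext
      from pyspark.context import SparkContext

      glueContext = GlueContext(SparkContext.getOrCreate())

      frame = glueContext.create_dynamic_frame.from_catalog(
          database="etl_demo_db", table_name="testreport_csv"
      )
      frame.printSchema()   # does the crawled schema match the Redshift table?
      print(frame.count())  # were any rows actually read?
      frame.toDF().show(5)  # eyeball a few records for formatting issues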

  • @AliMirfaisal
    @AliMirfaisal 6 months ago +1

    How can I contact you?

    • @cloudquicklabs
      @cloudquicklabs  6 months ago

      You could reach me over email vrchinnarathod@gmail.com

  • @saikrishna8338
    @saikrishna8338 7 months ago +1

    Thanks for the valuable inputs. Small query: what if my incremental file lands in a different folder in the same bucket? How is the crawler going to handle that?

    • @cloudquicklabs
      @cloudquicklabs  7 months ago

      Thank you for watching my videos.
      While defining the crawler you provide the path; choose the right path accordingly, and the crawler will scrape all data from all files and folders present underneath the defined folder.

    • @saikrishna8338
      @saikrishna8338 7 months ago

      @@cloudquicklabs Thanks for the reply. What if my folder structure is like the below?
      input_bucket/year={current_year}/month={current_month}/day={current_date}/file.txt
      How can I define my crawler to pick up the file based on the date and load data incrementally, not as a full refresh... any idea?
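
      One common approach for a Hive-style layout like this (not covered in the video; database, table, and padding assumptions are hypothetical): the crawler registers the year/month/day folders as partitions, and the job reads only the current partition with a pushdown predicate instead of re-reading the whole bucket.

      from datetime import date

      from awsglue.context import GlueContext
      from pyspark.context import SparkContext

      glueContext = GlueContext(SparkContext.getOrCreate())

      today = date.today()
      incremental = glueContext.create_dynamic_frame.from_catalog(
          database="etl_demo_db",
          table_name="input_bucket",
          # Assumes zero-padded month/day folder names; adjust to match the layout
          push_down_predicate=(
              f"year == '{today.year}' and month == '{today.month:02d}' "
              f"and day == '{today.day:02d}'"
          ),
      )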