AWS Tutorials - Building ETL Pipeline using AWS Glue and Step Functions

  • Published 30 Oct 2021
  • The script URL - github.com/aws-dojo/analytics...
    In AWS, ETL pipelines can be built using AWS Glue Jobs and Glue Crawlers. Glue Jobs handle data transformation, while Glue Crawlers maintain the data catalog. AWS Step Functions is one approach to creating such pipelines. In this tutorial, learn how to use Step Functions to build an ETL pipeline in AWS.
  • Science & Technology
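
    As several comments below discuss, the video's state machine starts and polls the Glue crawlers through small Lambda functions. A minimal sketch of those two handlers, assuming the crawler name arrives in the state input (names and event shape are illustrative, not taken from the video's script):

    import boto3

    glue = boto3.client("glue")

    def start_crawler_handler(event, context):
        # Kick off the crawler named in the Step Functions state input.
        glue.start_crawler(Name=event["crawler_name"])
        return {"crawler_name": event["crawler_name"]}

    def get_crawler_state_handler(event, context):
        # Return the crawler state (READY / RUNNING / STOPPING) so a
        # Choice state can branch on it.
        response = glue.get_crawler(Name=event["crawler_name"])
        return {"crawler_name": event["crawler_name"],
                "state": response["Crawler"]["State"]}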

COMMENTS • 76

  • @arunr2265 2 years ago +19

    Your channel is gold for data engineers. Thanks for sharing the knowledge!

  • @vaishalikankanala6499 2 years ago +2

    Clear and concise. Great work, thank you very much!

  • @coldstone87 2 years ago +1

    This is amazing. Glad I found this on YouTube. A million thanks.

  • @harsh2014 2 years ago +1

    Thanks for your session, it helped me!

  • @pravakarchaudhury1623 2 years ago +1

    It is really awesome. A million thanks to you.

  • @veerachegu 2 years ago +1

    Really helpful; no institute offers training on this. Thank you so much!

  • @anuradha6892 1 year ago

    Thanks 🙏 it was a great video.

  • @akhilnooney534 1 year ago +1

    Very Well Explained!!!!

  • @kamrulshuhel7126 2 years ago

    Thank you so much for your nice tutorial.
    I would be grateful if you could respond; I have an issue.
    When I use this condition in the Step Functions workflow - not ($.state == "READY") -
    I get this error:
    An error occurred while executing the state 'Choice' (entered at the event id #13). Invalid path '$.state': The choice state's condition path references an invalid value.
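
    That error usually means the input reaching the Choice state has no top-level "state" field for $.state to resolve against. A sketch of a Lambda result that makes the path valid (the crawler name and event shape are illustrative):

    import boto3

    glue = boto3.client("glue")

    def get_crawler_state_handler(event, context):
        response = glue.get_crawler(Name=event["crawler_name"])
        # Returning this dict makes $.state a valid path for the Choice state.
        # If the task instead writes its result under, say, "ResultPath":
        # "$.taskresult", the condition must reference $.taskresult.state.
        return {"state": response["Crawler"]["State"]}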

  • @4niceguy 2 years ago

    Great! I really appreciate it!

  • @simij851 2 years ago

    Thank you a ton for doing this!!!

  • @najmehforoozani 2 years ago +1

    Great work

  • @ravitejatavva7396 2 months ago

    @AWSTutorialsOnline, appreciate your good work. AWS Glue has evolved so much now. How can we incorporate data quality checks into the pipelines, send email notifications to users with DQ failure results (such as rules_succeeded, rules_skipped, rules_failed), and publish the data to a QuickSight dashboard? Do we still need Step Functions? Any thoughts / suggestions, please.

  • @terrcan1008 2 years ago +2

    Thanks for this kind of tutorial.
    Could you please share some scenarios for AWS Glue jobs along with sessions, as well as for AWS Lambda?
    I would also like to understand incremental load scenarios in AWS Glue using Hudi datasets, and other scenarios on the same topic.

  • @BradThurber 2 years ago +1

    It looks like Step Functions Workflow Studio includes AWS Glue Start Crawler and AWS Glue Get Crawler states. Could these be used directly instead of the Lambdas?

  • @PipatMethavanitpong 2 years ago +1

    Thank you. This is a nice ETL demo. I wonder how you handle previously extracted and cleaned data.
    Glue jobs are append-only writers, so the raw bucket will contain both old and new extracts, and the cleaning job will run on both.
    I think there should be some logic to separate old files from new files.

    • @AWSTutorialsOnline 2 years ago +1

      You can enable job bookmarks on the Glue job; that way the job will not reprocess data it has already handled.
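
      For reference, bookmarks are switched on through the --job-bookmark-option argument. A sketch, with the job name as a placeholder:

      import boto3

      glue = boto3.client("glue")

      # Per-run enablement when starting the job (job name is a placeholder).
      # Note: inside the Glue script itself, job.init(...) / job.commit() must
      # run, and sources must be read as DynamicFrames, or the bookmark will
      # not advance.
      glue.start_job_run(
          JobName="etl-clean-job",
          Arguments={"--job-bookmark-option": "job-bookmark-enable"},
      )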

    • @PipatMethavanitpong 2 years ago

      @AWSTutorialsOnline Sounds nice. I'll check it out. Thank you.

  • @picklu1079 2 years ago +1

    Thanks for the video. If I use Step Functions to orchestrate Glue workflows, will that slow the whole process down?

    • @AWSTutorialsOnline 2 years ago

      Please tell me more. Why do you want to orchestrate Glue workflows?

  • @simij851 2 years ago

    What would you advise if we have 150 tables to move from MySQL into S3 (no business transformation, just a raw dump load)? Should we have them all in one Step Function running in parallel, or create individual pipelines to reduce the risk that if one fails, all fail together?

  • @chatchaikomrangded960 2 years ago +1

    Good one.

  • @sriadityab4794 2 years ago

    How do we handle multiple files dropped in S3 at the same time, when we need to trigger one Glue job using Lambda? I see limitations where it throws an error because it can't handle multiple files at a time. How should we handle Lambda here? Any help is appreciated.

    • @AWSTutorialsOnline 2 years ago +1

      Yeah, it is a real pain if you drop multiple files at ingestion time (in the raw layer) and you want the Glue job to start only after all drops have completed. Past the raw stage, you can hook into Glue and Crawler events to run the pipeline, but at ingestion time you rely on the S3 file-drop event.
      In such cases, the best method is to drop a token file after all the files are dropped. The S3 event is configured on the put/post of this token file. The crawler is configured to exclude the token file, and the Glue job, if doing file-based operations, also excludes it. Hope it helps.
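
      A rough sketch of that trigger, assuming the token file is named _SUCCESS.token and the ARN is a placeholder:

      import json
      import boto3

      sfn = boto3.client("stepfunctions")

      def handler(event, context):
          # Lambda subscribed to the S3 put event; it starts the pipeline
          # only when the token file lands.
          record = event["Records"][0]
          bucket = record["s3"]["bucket"]["name"]
          key = record["s3"]["object"]["key"]
          if not key.endswith("_SUCCESS.token"):
              return
          sfn.start_execution(
              stateMachineArn="arn:aws:states:us-east-1:123456789012:stateMachine:etl-pipeline",
              input=json.dumps({"bucket": bucket, "token_key": key}),
          )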

  • @anmoljm5799 2 years ago +1

    My data source is CSV files dropped into an S3 bucket, which is crawled; I trigger the crawler using a Lambda that detects when an object has been dropped into the bucket. How do I trigger a pipeline of Glue jobs upon completion of that first crawler which crawls my source bucket?
    I could use Workflows, which is part of Glue, but I have a Glue DataBrew job that needs to be part of the pipeline.

    • @AWSTutorialsOnline 2 years ago

      You need to use an event-based mechanism. I have a tutorial for it here - ua-cam.com/video/04BbCLDlvII/v-deo.html

    • @anmoljm5799 2 years ago

      @AWSTutorialsOnline Thank you for the reply and the awesome video!

  • @abeeya13 24 days ago

    Can we combine batch processing with Step Functions?

  • @rishubhanda1084 2 years ago +1

    Amazing video!! Could you please go over how to build something like this with the CDK? The visual editor is helpful, but I find it easier to provision resources with code.

    • @AWSTutorialsOnline 2 years ago +1

      Hi - yes. Planning a CDK video for setting up a data platform.

    • @rishubhanda1084 2 years ago +1

      @AWSTutorialsOnline Thank you so much! I just watched all your videos on Glue, and I think the event-driven pipeline with EventBridge would be the most helpful.

  • @johnwilliam9310 1 year ago +1

    Which one would you recommend for automating the ETL process? I have seen the AWS Glue Workflow video as well, and this video does something similar, i.e. automating the ETL process. I cannot decide which one I should use: Workflows or Step Functions?

    • @AWSTutorialsOnline 1 year ago +1

      Glue Workflow is good for a simple workflow of Glue jobs and crawlers. However, if you want to build a complex workflow where you reuse the same job / crawler and also call other AWS services, you should choose Step Functions. Hope it helps.

    • @johnwilliam9310 1 year ago +1

      @AWSTutorialsOnline Thank you for providing clarity.

  • @nlopedebarrios 6 months ago

    Considering the continuous evolution of AWS Glue, what do you think is more suitable for a newbie: orchestrating the ETL pipeline with Glue Workflows or Step Functions?

  • @veerachegu 2 years ago +1

    Really awesome video; this content is available nowhere else. Small request: can you do a lab where files are uploaded to S3 daily or hourly, and the upload triggers the Step Functions pipeline from S3 through to the end of the job?

  • @veerachegu 2 years ago +1

    Please can you explain what job takes place between the raw crawler and the cleansed crawler?

    • @AWSTutorialsOnline 2 years ago

      The raw layer is immutable; it presents the data in the format in which it was ingested. From the raw to the cleansed layer, you do cleaning operations such as handling missing values and standardizing formats for dates, currency, column naming, etc.
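
      For illustration, typical raw-to-cleansed operations in a Glue job might look like this (database, table and column names are invented):

      from awsglue.context import GlueContext
      from pyspark.context import SparkContext
      from pyspark.sql import functions as F

      glue_context = GlueContext(SparkContext.getOrCreate())

      raw = glue_context.create_dynamic_frame.from_catalog(
          database="raw_db", table_name="sales").toDF()

      cleansed = (
          raw.dropna(subset=["order_id"])                                      # missing values
             .withColumn("order_date", F.to_date("order_date", "dd/MM/yyyy"))  # date format
             .withColumn("amount", F.col("amount").cast("decimal(10,2)"))      # currency
             .withColumnRenamed("Cust Name", "customer_name")                  # column naming
      )

      cleansed.write.mode("overwrite").parquet("s3://my-bucket/cleansed/sales/")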

  • @anirbandatta2037 2 years ago

    Hi, could you please share some CI/CD scenarios using AWS services?

  • @nlopedebarrios 6 months ago

    If the purpose of the ETL pipeline is to move data around, and the sources, stages and destination are already cataloged, why would you need to run the crawlers after each Glue job finishes?

  • @Draco-pu4ro 1 year ago +1

    How do we run this as an automated flow in the real world, i.e. in a production environment?

    • @AWSTutorialsOnline 1 year ago

      You can automate it in two ways: event-based or schedule-based. Event-based means running the Step Function when data lands in the S3 bucket. Schedule-based means running the Step Function at a scheduled time (configured with Amazon EventBridge).
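
      A sketch of the schedule-based option with EventBridge (all names and ARNs are placeholders):

      import boto3

      events = boto3.client("events")

      # Rule that fires every day at 02:00 UTC.
      events.put_rule(
          Name="nightly-etl",
          ScheduleExpression="cron(0 2 * * ? *)",
          State="ENABLED",
      )
      # Point the rule at the state machine; the role must allow
      # states:StartExecution.
      events.put_targets(
          Rule="nightly-etl",
          Targets=[{
              "Id": "etl-pipeline",
              "Arn": "arn:aws:states:us-east-1:123456789012:stateMachine:etl-pipeline",
              "RoleArn": "arn:aws:iam::123456789012:role/EventBridgeStartSfnRole",
          }],
      )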

  • @user-lq6gc1tw2v 1 year ago +1

    Hello, good video. Maybe someone knows when to use Glue Workflows and when to use Step Functions?

    • @AWSTutorialsOnline 1 year ago

      Use a Glue workflow when you want to orchestrate Glue jobs and crawlers only; use Step Functions when you want to orchestrate Glue jobs and crawlers plus other services as well.

  • @veerachegu 2 years ago +1

    One doubt: is the crawler operation mandatory for going from raw data to cleansed?
    Can we transfer the raw data directly to cleansed with the help of a Glue job?

    • @AWSTutorialsOnline 2 years ago

      It is not mandatory, but cataloging data at each stage is recommended practice. It makes the data searchable and discoverable at each stage.

  • @veeru2310 1 year ago

    Hi sir, I am passing Glue job arguments in Step Functions to run parallel Glue jobs, but unfortunately the job succeeds while no records are transferred, even though the source and destination paths are correct. Please help; the job is not taking the parameters from Step Functions.

    • @AWSTutorialsOnline 1 year ago

      Can you show the syntax you use to pass parameters when calling the Glue job?

    • @veeru2310 1 year ago

      @AWSTutorialsOnline I am going to load 18 tables, so I need to pass 18 table parameters, right? Is that a good way? Can you please suggest?
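
      For anyone comparing syntax: arguments passed from a Step Functions Glue task must be prefixed with "--" in the task's Arguments map and read with getResolvedOptions inside the script. A sketch with invented argument names:

      # The corresponding state machine task parameters (ASL), as a comment:
      # "Parameters": {
      #   "JobName": "load-table-job",
      #   "Arguments": {
      #     "--source_path.$": "$.source_path",
      #     "--table_name.$": "$.table_name"
      #   }
      # }

      # Inside the Glue script:
      import sys
      from awsglue.utils import getResolvedOptions

      args = getResolvedOptions(sys.argv, ["JOB_NAME", "source_path", "table_name"])
      print(args["source_path"], args["table_name"])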

  • @InvestorKiddd 1 year ago

    How do I create a Glue job using AWS Lambda?

    • @AWSTutorialsOnline 1 year ago

      Do you want to create a Glue job or run a Glue job?

    • @InvestorKiddd 1 year ago

      @AWSTutorialsOnline Create a Glue job using AWS Lambda or AWS Step Functions.

    • @AWSTutorialsOnline 1 year ago

      @InvestorKiddd I can probably explain, but I want to understand more. Generally, people have a job configured and want to run it using Lambda / Step Functions. Why do you need to create a job using Lambda / Step Functions? What is the use case?

    • @InvestorKiddd 1 year ago

      @AWSTutorialsOnline So I am scraping some files based on cities, and then I want to convert them into Parquet and use Athena queries to get insights.
      I can use the same job for the mapping and conversion, but the input and output path names keep changing: say the input file is mumbai.csv (city.csv); the input path changes when we go to bangalore.csv. To solve this, my idea was to create a new job for each new city, or, if we can change the input and output paths programmatically, that is also fine for me. I want to automate this process.

    • @AWSTutorialsOnline 1 year ago +1

      @InvestorKiddd In this case, you should create one job and, at run time, pass the source and destination locations as job parameters. Please check my videos - I talked about it in one of them.
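
      A sketch of that pattern for the city files (bucket and job names are invented):

      import boto3

      glue = boto3.client("glue")

      def run_for_city(city: str) -> str:
          # One job, many cities: pass the source/destination paths per run.
          response = glue.start_job_run(
              JobName="csv-to-parquet",
              Arguments={
                  "--source_path": f"s3://my-raw-bucket/{city}.csv",
                  "--target_path": f"s3://my-parquet-bucket/{city}/",
              },
          )
          return response["JobRunId"]

      for city in ["mumbai", "bangalore"]:
          print(city, run_for_city(city))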
