AWS Glue Complete ETL Project Demo | Load Data from AWS S3 to Amazon Redshift (Data Engineer Project)

  • Published 27 May 2023
  • #AWS GLUE
    Complete ETL project using S3, AWS Glue, PySpark, Athena, Redshift, and a scheduler.
    We create a Glue crawler and a Glue ETL script and design an automatic workflow, so you learn the complete workflow.
    Code is available at the GitHub link below.
    github.com/saurabhgarg013/My_...
    Amazon Glue Tutorial
    A complete Amazon Glue project tutorial in a mix of Hindi and English.
    This video is long but very useful: you will learn how to write a Glue ETL script
    that reads data from S3 and inserts it into Redshift using AWS Glue with PySpark,
    and along the way you will learn about Glue PySpark code.
    Below are the topics covered in the video:
    AWS Crawler
    AWS Glue ETL script
    AWS Glue Workflow
    AWS Glue with Redshift
    AWS GLUE Concept
    AWS Glue: Read CSV files from AWS S3 without the Glue Catalog
    AWS Glue: Insert data into Redshift without the Glue Catalog
    AWS S3 + Glue + Athena
    AWS S3 + Glue + Redshift
    PySpark concepts
    Glue DynamicFrame concept
    Components of AWS Glue
    Data catalog
    Database
    Crawler and Classifier
    Glue Job
    Trigger and workflow
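As a sketch of the job outlined above (read CSV from S3 without the Catalog, transform, write to Redshift via a Glue connection), the following is a minimal, hedged example; the bucket names, table name, temp dir, and connection name are placeholders, not the exact values used in the video:

```python
# Hedged sketch of the Glue PySpark job described above. All bucket,
# table, and connection names are placeholders.
S3_SOURCE_OPTIONS = {"paths": ["s3://my-input-bucket/raw/"]}        # assumed bucket
REDSHIFT_OPTIONS = {"dbtable": "public.my_table", "database": "dev"}  # assumed target

def run_job():
    # awsglue is only available on Glue workers, so the imports stay
    # inside the function; the module can still be inspected locally.
    import sys
    from awsglue.utils import getResolvedOptions
    from awsglue.context import GlueContext
    from awsglue.job import Job
    from awsglue.dynamicframe import DynamicFrame
    from pyspark.context import SparkContext

    args = getResolvedOptions(sys.argv, ["JOB_NAME"])
    glue_context = GlueContext(SparkContext())
    job = Job(glue_context)
    job.init(args["JOB_NAME"], args)

    # Read CSV files from S3 without the Glue Catalog
    dyf = glue_context.create_dynamic_frame.from_options(
        connection_type="s3",
        connection_options=S3_SOURCE_OPTIONS,
        format="csv",
        format_options={"withHeader": True},
    )

    # Transform via the underlying Spark DataFrame, then convert back
    df = dyf.toDF().dropDuplicates()
    dyf_out = DynamicFrame.fromDF(df, glue_context, "dyf_out")

    # Write to Redshift through a Glue connection; a temp S3 dir is required
    glue_context.write_dynamic_frame.from_jdbc_conf(
        frame=dyf_out,
        catalog_connection="MyRedshiftConnection",   # assumed connection name
        connection_options=REDSHIFT_OPTIONS,
        redshift_tmp_dir="s3://my-temp-bucket/tmp/",
    )
    job.commit()

# On a Glue worker the script would end by calling run_job().
```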
    Troubleshoot the AWS Glue error "VPC S3 endpoint validation failed"
    Setting up an S3 VPC gateway endpoint
    To set up an S3 VPC gateway endpoint, follow these steps:
    Open the Amazon VPC console.
    In the navigation pane, choose Endpoints.
    Choose Create Endpoint.
    For Service Name, select com.amazonaws.us-east-1.s3. Be sure that the Type column indicates Gateway.
    Note: Be sure to replace us-east-1 with the AWS Region of your choice.
    For VPC, select the VPC where you want to create the endpoint.
    For Configure route tables, a route to the S3 VPC endpoint is automatically added.
    For Policy, leave the default option Full Access.
    Choose Create Endpoint.
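The console steps above can also be done with the SDK; a minimal boto3 sketch, where the VPC ID and route table IDs are placeholders you would supply:

```python
def s3_gateway_service_name(region):
    # Gateway endpoint service names for S3 follow this pattern,
    # e.g. com.amazonaws.us-east-1.s3
    return f"com.amazonaws.{region}.s3"

def create_s3_gateway_endpoint(vpc_id, route_table_ids, region="us-east-1"):
    # boto3 import kept local so the helper above is usable without AWS
    import boto3
    ec2 = boto3.client("ec2", region_name=region)
    return ec2.create_vpc_endpoint(
        VpcEndpointType="Gateway",      # must be Gateway, not Interface
        VpcId=vpc_id,                   # e.g. "vpc-0123456789abcdef0" (placeholder)
        ServiceName=s3_gateway_service_name(region),
        RouteTableIds=route_table_ids,  # the S3 route is added to these tables
    )
```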
    AWS GLUE ETL AND REDSHIFT RELATED DATA ENGINEERING VIDEOS.
    • Create Redshift Cluste... (Create Redshift Cluster and Load Data using Python)
    • AWS Glue ETL with Pyth... (AWS Glue ETL with Python shell |Read data from S3 and insert Redshift)(Not using Pyspark with glue)
    • AWS GLUE Complete ETL ... (AWS GLUE Complete ETL Project Demo| Load Data from AWS S3 to Amazon RedShift)(Data engineer Project)
    • Redshift using Python|... (Redshift using Python| Load and insert and copy data into redshift using psycopg2 )
    • Aws Redshift tutorial ... (Aws Redshift tutorial |Amazon Redshift Architecture | Data Warehouse Concept)
    • Building ETL Pipeline ... (Building ETL Pipeline using AWS Glue and Step Functions)
    • AWS GLUE CRAWLER TUTOR... (AWS GLUE CRAWLER TUTORIAL with DEMO| Learn AWS GLUE)
    • AWS GLUE CONCEPT|GLUE ... (AWS GLUE CONCEPT|GLUE DATA CATALOG|GLUE TUTORIAL)
    AWS ATHENA AND LAMBDA RELATED
    • ATHENA COMPLETE TUTORI... (ATHENA COMPLETE TUTORIAL WITH DEMO |AWS ATHENA TABLE PARTITION|DEMO)
    • How to run Athena quer... (How to run Athena query from AWS Lambda DEMO|AWS ATHENA FROM LAMBDA)
    • Athena using Python f... (Athena using Python for Beginners)
    In case of any query, you can contact us directly on WhatsApp 8800502668
    or write to technodevs13@gmail.com.
    Please subscribe to my channel Techno Devs with Saurabh and press the bell icon to get regular updates on new videos.
  • Science & Technology

COMMENTS • 148

  • @devanshpandey9098
    @devanshpandey9098 3 months ago +6

    It is the one and only video on YouTube through which you can understand the whole concept of Athena, Redshift, Glue, and S3 very easily.

  • @akshaymuktiramrodge3233
    @akshaymuktiramrodge3233 8 months ago +7

    I don't know how to thank you... believe me, this is one of the finest explanations of Glue on the internet, and it also includes Athena and Redshift. Thank you so much Saurabh. @Techno Devs with Saurabh

  • @vinodsagar2412
    @vinodsagar2412 10 months ago +4

    Sir.... no one on YouTube shares this kind of knowledge, you are really a god of AWS.... thanks a lot.... guruji

  • @Lakshvedhi
    @Lakshvedhi 11 months ago +1

    One of the best videos I've seen on YouTube about this topic. Thank you so much. Please make lots more videos on AWS data engineering.

  • @johnsonrajendran6194
    @johnsonrajendran6194 3 months ago +1

    No other video is as in-depth as yours.
    Thanks for sharing Sir!!

  • @user-el7qo3wf2k
    @user-el7qo3wf2k 11 months ago +1

    Amazing content.
    I can say that no one has explained it in such detail. Thanks a lot

  • @vaibhavvasani2401
    @vaibhavvasani2401 8 months ago +1

    Thank you so much for the detailed tutorial; it covered everything from start to end

  • @user-ek8ro7my5v
    @user-ek8ro7my5v 5 months ago +1

    Hats off to you, Sir. The explanation was marvelous!

  • @pushkarratnaparkhi2205
    @pushkarratnaparkhi2205 8 months ago +1

    Explained in great detail. Thanks a lot for your efforts.

  • @adityaverma4770
    @adityaverma4770 10 months ago +1

    It is truly a gem ... heartfelt thanks!!

  • @sajalhsn13
    @sajalhsn13 4 months ago +1

    Very in-depth hands-on lesson. Greatly appreciate your hard work. Keep up the good work. 💚

  • @cloudandsqlwithpython
    @cloudandsqlwithpython 11 months ago

    This is an excellent video on AWS... great work sir

  • @shatamjeevdewan1142
    @shatamjeevdewan1142 1 month ago +1

    Thank you Saurabh. It was a great video, and I really admired your soft voice in Hindi

  • @sayedsamimahamed5324
    @sayedsamimahamed5324 7 months ago +1

    Simply wow. My hand automatically moved to the like and subscribe button even before completing the video. This video is a pure gem. Thank you for your knowledge. I will share some more ideas for videos with you. And once again, many, many thanks to you.

  • @itronicsolutions
    @itronicsolutions 7 months ago +1

    Really very helpful session with deep knowledge. Thank you so much for this. Please keep it up.

  • @Stardust_Tech_Studio
    @Stardust_Tech_Studio 1 year ago +2

    Thanks Saurabh ... Your way of explanation and also your knowledge level is 10/10.
    I personally shared this video with about 20 people.

    • @TechnoDevs
      @TechnoDevs 1 year ago

      Thanks bro. I am really motivated.

  • @pkumar009
    @pkumar009 10 months ago +1

    It's very helpful, Saurabh. Thank you for sharing this type of video 😊

  • @saurabh2898
    @saurabh2898 6 months ago

    crystal clear understanding ... GREAT

  • @javeedmohammedabdul9274
    @javeedmohammedabdul9274 5 months ago

    The way you explain step by step is just wow

  • @user-lk2dz3mw9h
    @user-lk2dz3mw9h 7 months ago +1

    Great demo, worthwhile to watch and learn from you sir

  • @shanmugagpa5863
    @shanmugagpa5863 10 months ago +1

    Really intuitive. Very well explained.

  • @waseemswork
    @waseemswork 9 months ago +1

    Very nice and detailed explanation, very helpful for all.

  • @nitinsalunkhe5000
    @nitinsalunkhe5000 1 month ago

    Thanks a lot for this wonderful content. It has really helped me.

  • @cricket-veda
    @cricket-veda 1 month ago +1

    One of the best videos ❤

  • @thepositionalplay5307
    @thepositionalplay5307 3 months ago +1

    Top class video. Really great content.

  • @ankushchavhan_
    @ankushchavhan_ 1 month ago +1

    Great Session

  • @KundanKumar-ce4sc
    @KundanKumar-ce4sc 1 month ago

    It's fantastic sir, great

  • @pradeepkumarsahu7031
    @pradeepkumarsahu7031 1 year ago +1

    Very helpful, thanks for making the video

  • @Aditya.barnwal777
    @Aditya.barnwal777 5 months ago +1

    I can see 1:1 replies. Salute to the effort.

  • @AnkitaKulshreshtha
    @AnkitaKulshreshtha 1 day ago

    Very informative 🎉🎉 Thank you

  • @haridasbhoite6545
    @haridasbhoite6545 11 months ago +1

    really amazing video ever!

  • @Anish-sv4qz
    @Anish-sv4qz 8 months ago +1

    amazing content! thank you

  • @maheshg82
    @maheshg82 1 month ago

    Amazing video

  • @parikshitpatil7689
    @parikshitpatil7689 1 year ago +1

    Thank you so much for the video brother! It will help many people like me 😊

    • @TechnoDevs
      @TechnoDevs 1 year ago

      Thanks bro for the compliment. Yes, this video is very informative; please also forward it to your friends and help me get more subscribers

  • @aka_rony
    @aka_rony 5 months ago +1

    excellent explanation sir

  • @mrinaldhawan3959
    @mrinaldhawan3959 5 months ago +1

    Brilliant Tutorial

  • @RAHULKUMAR-px8em
    @RAHULKUMAR-px8em 8 months ago +4

    Please Sir, make a playlist of all the services used by an AWS Data Engineer, in order, because new joiners to your channel are confused about what to learn first and what to learn next.

  • @AshisRaj
    @AshisRaj 9 months ago +1

    It's really worthwhile to spend 2 hrs here.

  • @PawanSharma-dd7jv
    @PawanSharma-dd7jv 1 year ago +1

    Nice explanation bro. Liked it. Waiting for more videos on Glue ETL scenarios.

  • @deepeshjaiswal8733
    @deepeshjaiswal8733 5 months ago

    Really nice session ❤

  • @entertainmenttv1672
    @entertainmenttv1672 4 months ago

    Well explained video, thanks a lot

  • @Mandkar
    @Mandkar 3 months ago

    Thank you😊

  • @ripponmangaraj782
    @ripponmangaraj782 1 year ago +1

    Thank you !

  • @sachindubey4315
    @sachindubey4315 8 months ago

    The video is awesome; it covered each and every point from scratch. I just completed the hands-on project. If you can share the PPT as well, that would be great.

  • @shubhamdhakne3457
    @shubhamdhakne3457 11 months ago +1

    Nice video, really awesome, and great knowledge

  • @Rohit-Hs
    @Rohit-Hs 1 year ago +1

    Great info saurabh!

  • @akhilendra1975
    @akhilendra1975 2 months ago +1

    Excellent explanation, but I request you to make shorter videos; if more than an hour, it's better to split the video into 2 or 3 parts

  • @harshitanamiwal8962
    @harshitanamiwal8962 1 year ago

    Really Nice video

  • @rahimkhan9041
    @rahimkhan9041 1 year ago

    Sound good 💯

  • @priyakhandal2789
    @priyakhandal2789 1 year ago

    Awesome 😍

  • @rahulpamnani6574
    @rahulpamnani6574 1 year ago +1

    Nice video. Please keep making videos on AWS services.

  • @kunalkhandal6467
    @kunalkhandal6467 1 year ago

    Fire 🔥🔥🔥🔥🔥

  • @ChetanSharma-oy4ge
    @ChetanSharma-oy4ge 11 months ago +1

    Please make a playlist where you add these videos step by step, I mean which video I should follow first before jumping in.

  • @amritranjannayak2705
    @amritranjannayak2705 2 months ago

    Thanks for the video. I have a question: in the Glue job, at the last step, why do we need to convert back to a Glue DynamicFrame? We could store directly from the Spark DataFrame, right?

  • @Aditya.barnwal777
    @Aditya.barnwal777 5 months ago

    Nice !

  • @achintyadutta8066
    @achintyadutta8066 4 months ago

    Thank you very much for such an informative video. You created the 1st crawler to crawl the data present in the S3 bucket to infer its schema, but you created the 2nd crawler to crawl the table structure in Redshift. So, can we create crawlers for both purposes: crawling the data and crawling the table structure?

  • @tmaroofa
    @tmaroofa 6 months ago +1

    Very useful video sir, could you please make a video on AWS data lakes

  • @anusinghchoyal7772
    @anusinghchoyal7772 1 year ago

    Awesome editing

  • @user-qb6py7yc6v
    @user-qb6py7yc6v 7 months ago

    Please arrange the videos in a series; it is difficult for beginners to choose which one to watch first

  • @gurmitchada280
    @gurmitchada280 1 year ago

    🔥🔥🔥🔥🔥🔥

  • @kamalsain8084
    @kamalsain8084 1 year ago

    Good ,😍😍😍😍

  • @nitlover8319
    @nitlover8319 1 month ago

    Thanks a ton for this session bhai. Can you share the PPT for the session? It would be really helpful. Really appreciate it, thanks again 🙏

  • @inderjeetdooth2633
    @inderjeetdooth2633 1 year ago

    😍😍😍😍

  • @AnuragTiwari-vw9ue
    @AnuragTiwari-vw9ue 3 months ago +1

    Really Amazing from heart

  • @saurabhgavande6728
    @saurabhgavande6728 4 months ago +1

    Sir, is this a project that you worked on in a real job, or is it just for practice?
    I am looking for a job change and want to add a project, so I was curious if I can add this, as I have 2 years of experience.

  • @user-rp3mw1tj3s
    @user-rp3mw1tj3s 3 months ago

    If we have multiple parquet files in the output bucket, then all the file versions will come up as duplicates. Could you please help with how to control that?

  • @paritoshec
    @paritoshec 1 year ago +1

    SUPERB SUPERB SUPERB

  • @jjawal
    @jjawal 1 year ago +1

    Very well explained. Very helpful. Definitely good learning for a newbie like me.
    I have one question related to data: if we have multiple null or empty rows across different columns, how can we handle this with a large dataset?

    • @TechnoDevs
      @TechnoDevs 1 year ago

      Thanks for watching. You can use the Filter transformation to remove rows that contain null or empty values.
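As a sketch of such a filter predicate: in a Glue job it would be passed as Filter.apply(frame=dyf, f=keep_row); the column names below are hypothetical, and the quick check uses plain dicts instead of DynamicFrame rows:

```python
WATCHED_COLUMNS = ["customer_id", "order_id"]   # hypothetical column names

def keep_row(row):
    # Keep only rows where none of the watched columns is null or empty
    return all(row.get(c) not in (None, "") for c in WATCHED_COLUMNS)

# Quick local check with plain dicts
rows = [
    {"customer_id": "c1", "order_id": "o1"},
    {"customer_id": None, "order_id": "o2"},
    {"customer_id": "c3", "order_id": ""},
]
clean = [r for r in rows if keep_row(r)]
print(len(clean))  # 1 (only the first row survives)
```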

  • @AkshanshBaliyan-is8xz
    @AkshanshBaliyan-is8xz 3 months ago

    Failed to test connection MyRedshiftConnection due to FAILED status.
    Getting the above error while testing the connection to Redshift in Glue.

  • @gokulbaniwal2295
    @gokulbaniwal2295 1 year ago

    harvesting poison videos 😍

  • @user-zu9lw3uc3p
    @user-zu9lw3uc3p 1 year ago +1

    Good job Saurabh. Very helpful.
    I have one question: in the pipeline described, suppose we schedule the pipeline to run once daily. Do we also need to run the crawlers daily, or run them only once at the start and the rest of the pipeline daily, running the crawlers only when the input data schema changes?
    Also, can you make another video explaining how the transformation code (Glue / Spark) is connected and maintained using Git in real-world projects? Like: create a pipeline, upload it to Git; check out from Git, modify the code, and push it back; the next time the pipeline runs, it picks up the latest code from Git.
    Thanks

    • @TechnoDevs
      @TechnoDevs 1 year ago +4

      Thanks for watching my video. If your data sources are updated frequently and you need to capture those changes daily, you can schedule the AWS Glue crawler to run once a day. This ensures that your metadata and schema information stay up to date. I will create a pipeline video in the future.

  • @AkshanshBaliyan-is8xz
    @AkshanshBaliyan-is8xz 3 months ago

    Failed to test connection MyRedshiftConnection due to FAILED status.
    Getting this error message when doing Test Connection in Data Connections in Glue. Please help!

  • @AJEETKUMAR-yj8tv
    @AJEETKUMAR-yj8tv 10 months ago

    Hi Saurabh, I created one MySQL instance and a few tables with sample data, then created a database in the Data Catalog. Now, when I try to create a connection to the database in AWS Glue, it throws an "invalid parameter" error that I am unable to fix. Please help me fix this error.

  • @mdamirnazi
    @mdamirnazi 2 months ago

    Do these topics come up in the Solutions Architect exam?

  • @pradeepkumarsahu7031
    @pradeepkumarsahu7031 1 year ago +1

    Please make a video on Glue transformations

  • @AbhishekJain-wl3st
    @AbhishekJain-wl3st 2 months ago

    I am getting an access denied error while creating the crawler. Can someone please help me with this?

  • @sudippandit7051
    @sudippandit7051 11 months ago +1

    Super video! Very helpful. 😊

  • @user-jb9jr5tl3v
    @user-jb9jr5tl3v 1 year ago +1

    Hey Saurabh, very well explained; it's very useful, thank you so much. I watched almost every video about Glue ETL projects, but none explained it like this. I have a question: what are parameters, and why do we use them?

    • @TechnoDevs
      @TechnoDevs 1 year ago

      Thanks for watching my video. I didn't get which parameters you are talking about; please give some more context. You can also contact me directly on WhatsApp 8800502668
      or by mail at technodevs13@gmail.com with questions

    • @user-jb9jr5tl3v
      @user-jb9jr5tl3v 1 year ago

      In the AWS Glue console there is an option for job parameters. What is its purpose?

    • @TechnoDevs
      @TechnoDevs 1 year ago

      In AWS Glue, job parameters let you pass custom values
      to your ETL (Extract, Transform, Load) job at runtime.
      For example, to configure a job through the AWS CLI:
      suppose you want to run a job using the CLI and pass a location; then
      you use the parameter --scriptLocation:
      $ aws glue start-job-run --job-name "CSV to CSV" --arguments='--scriptLocation="s3://my_glue/libraries/test_lib.py"'
      To check this in a Glue ETL script,
      go to the job parameters option in the console (do not select any value from the dropdown).
      You can give a key --my_param and set any value you want to use in the script at runtime, like a filename or bucket name,
      e.g. key --my_param, value Hello:
      args = getResolvedOptions(sys.argv, ['JOB_NAME', 'my_param'])
      print("The value is:", args['my_param'])
      # prints: The value is: Hello
      (JOB_NAME is an internal argument set by AWS Glue; do not set it yourself.)
      You can see the output in CloudWatch under /aws-glue/jobs/output.
      I hope you got your answer.
      Thanks

  • @snrmedia8965
    @snrmedia8965 6 months ago +1

    Great tutorial ❤ Where is the PDF?

  • @manishadavi2350
    @manishadavi2350 1 year ago

    🤗🤗🤗🤗🤗

  • @user-ft5ow9mb5z
    @user-ft5ow9mb5z 1 year ago +1

    Hey Saurabh, nice video. Can you produce a new video using Step Functions, without a crawler? ETL with S3, Glue, and Redshift using Step Functions.

    • @TechnoDevs
      @TechnoDevs 1 year ago

      Sure bro..

    • @TechnoDevs
      @TechnoDevs 1 year ago

      Video created, Glue with Step Functions: ua-cam.com/video/0lWPZbPQb7w/v-deo.html

  • @jotijhoda9053
    @jotijhoda9053 1 year ago

    Not bad

  • @user-by9lm9gv3z
    @user-by9lm9gv3z 1 year ago +1

    Hi bro,
    please make more videos on AWS;
    we will get to learn good projects from you. Please make them, bhai.

  • @purabization
    @purabization 1 year ago +1

    Amazing video Saurabh, superb explanation with proper flow. I tried the same thing by reading data from an RDS instance and loading the data into S3 using the Glue Catalog, but I am getting part-r files in my target S3 bucket. Can you tell me the reason... thanks in advance.

    • @TechnoDevs
      @TechnoDevs 1 year ago

      Thanks for watching my video. Those are Spark's partitioned output files (the part-r / part-* naming): Spark writes output in parallel, one file per partition, which AWS Glue uses for efficient data processing, optimized storage,
      and reduced cost by creating relevant partitions. The write.partitionBy method writes the data to S3 partitioned by column values; if you don't want column partitioning, use the method below.
      # Write data to S3 without column partitioning
      data_frame.write.parquet("s3://my_bucket/output_data/")
      Note this still produces one part file per Spark partition; to get a single output file, coalesce first, e.g. data_frame.coalesce(1).write.parquet(...).

    • @purabization
      @purabization 1 year ago +1

      @@TechnoDevs thank you for the reply. Can you please make a video on loading data from MySQL RDS to S3 using Glue? I followed the same steps but was unable to load the data into the S3 bucket.

    • @TechnoDevs
      @TechnoDevs 1 year ago

      Sure Bro...

  • @aryanvik5614
    @aryanvik5614 8 months ago +1

    Hi Saurabh, I have one question: in real projects we use CloudFormation, right, to create, update, and delete AWS resources in a safe and predictable manner?

    • @TechnoDevs
      @TechnoDevs 8 months ago +1

      In many real-time projects, a combination of both approaches is used. You might use the Management Console for initial setup and testing, then transition to CloudFormation templates as your deployment becomes more complex and production-ready.

    • @aryanvik5614
      @aryanvik5614 8 months ago

      Thanks for the response

  • @abhimanyukakde849
    @abhimanyukakde849 1 year ago +2

    Hi Saurabh Sir, Abhimanyu this side. I have one question: how can I receive notifications (email alerts) when my Glue ETL scripts fail?

    • @TechnoDevs
      @TechnoDevs 1 year ago +1

      Thanks for watching my video. You can watch this video: ua-cam.com/video/0lWPZbPQb7w/v-deo.html. In EventBridge, call the SNS notification service.

  • @saurabhgavande6728
    @saurabhgavande6728 4 months ago

    How can we automate this?

  • @user-on6vq5ee9d
    @user-on6vq5ee9d 5 months ago

    My data has 5000 records in Excel; they are not displayed in Athena, it gives an error

  • @user-wn4zk1gy6g
    @user-wn4zk1gy6g 7 months ago

    How to validate data in this system?

  • @user-by9lm9gv3z
    @user-by9lm9gv3z 1 year ago

    Getting this error when I ran a query on Athena:
    No output location provided. An output location is required either through the Workgroup result configuration setting or as an API input.

    • @TechnoDevs
      @TechnoDevs 1 year ago

      You need to set up an Athena output location when you query for the first time in Athena.
      Just give any bucket folder location where your output should be saved;
      configure the output location in the "Query result location" section in Athena.

    • @user-by9lm9gv3z
      @user-by9lm9gv3z 1 year ago

      @@TechnoDevs It worked

  • @vaibhavverma1340
    @vaibhavverma1340 11 months ago +1

    Hello sir, what if the crawler creates duplicate columns in AWS Glue?

    • @TechnoDevs
      @TechnoDevs 11 months ago +1

      If your data source has inconsistent or repeated headers, the crawler might interpret them as separate columns. Ensure that your data source is well formatted.

  • @swapnilwagh4725
    @swapnilwagh4725 1 year ago +1

    Hey Saurabh, nice video.
    Is there any Github link where we can access the script used in this video for glue transformations?

    • @TechnoDevs
      @TechnoDevs 1 year ago

      Sure bro I will give today.

    • @TechnoDevs
      @TechnoDevs 1 year ago

      github.com/saurabhgarg013/My_glue_project/

  • @Sunnykhatnani
    @Sunnykhatnani 2 months ago

    Where did you take this data from?

  • @alahisahi6762
    @alahisahi6762 11 months ago

    Why did we not change the type by converting it to a DataFrame and casting it as int?

    • @TechnoDevs
      @TechnoDevs 11 months ago

      I will update you bro after I check

  • @deepanshuaggarwal7042
    @deepanshuaggarwal7042 1 month ago

    Does AWS Athena take extra storage to show data in a table? If yes, what does it cost us?

    • @TechnoDevs
      @TechnoDevs 1 month ago +1

      AWS Athena does not require additional storage to show data in a table because it queries data directly from Amazon S3. Athena itself doesn't store data; the data you query must be stored in Amazon S3, and you will incur standard S3 storage costs for the data stored there.

  • @himimanichodari7686
    @himimanichodari7686 1 year ago

    what video you made

  • @omkarm7865
    @omkarm7865 1 year ago

    While creating the connection to Redshift, I am not getting the JDBC URL of Redshift in the dropdown
    .... at the 1:21 mark

    • @TechnoDevs
      @TechnoDevs 1 year ago

      Please make sure you select connection type Amazon Redshift; then you will get the Redshift URL in the dropdown. Also, before that you need to create a Redshift cluster.

    • @omkarm7865
      @omkarm7865 1 year ago

      @@TechnoDevs I have created a Redshift cluster, created an IAM role, and am selecting Redshift as well, but it's still not showing

    • @TechnoDevs
      @TechnoDevs 1 year ago

      Did you make your cluster publicly accessible?

    • @omkarm7865
      @omkarm7865 1 year ago

      @@TechnoDevs earlier it was not... I just did, but it's still not coming up in the dropdown

    • @TechnoDevs
      @TechnoDevs 1 year ago

      Please check that AWS Glue and Redshift are configured to operate within the same Virtual Private Cloud (VPC),
      and check the security group attached to Redshift.
      To allow AWS Glue access through that Redshift security group, you need to set the following inbound rule:
      Type: Custom TCP Rule
      Protocol: TCP
      Port Range: 5439 (the default Redshift port)
      Also ensure that the cluster is in an active state.
      I hope it will solve your problem.
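As a hedged boto3 sketch of the inbound rule described in the reply above (the security group IDs are placeholders, and allowing ingress from the security group Glue uses is one common pattern):

```python
REDSHIFT_PORT = 5439   # default Redshift port, as noted above

def redshift_ingress_permission(glue_sg_id):
    # Inbound rule: allow TCP 5439 from the security group Glue uses
    return {
        "IpProtocol": "tcp",
        "FromPort": REDSHIFT_PORT,
        "ToPort": REDSHIFT_PORT,
        "UserIdGroupPairs": [{"GroupId": glue_sg_id}],
    }

def open_redshift_to_glue(redshift_sg_id, glue_sg_id, region="us-east-1"):
    # boto3 import kept local so the rule builder is usable without AWS
    import boto3
    ec2 = boto3.client("ec2", region_name=region)
    ec2.authorize_security_group_ingress(
        GroupId=redshift_sg_id,   # e.g. "sg-0123..." (placeholder)
        IpPermissions=[redshift_ingress_permission(glue_sg_id)],
    )
```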

  • @ankushchavhan_
    @ankushchavhan_ 1 month ago

    Sir, can I get the PPT file?

  • @ishitagoel7412
    @ishitagoel7412 4 months ago

    My second ETL job is showing an error again and again: the specified bucket does not exist

    • @TechnoDevs
      @TechnoDevs 4 months ago

      Kindly check that the bucket region is the same as the Glue ETL job region

    • @ishitagoel7412
      @ishitagoel7412 4 months ago

      @@TechnoDevs my S3 bucket location is Asia Pacific (Mumbai) and I did not set any location for the ETL job; how do I check its location?

  • @pankajrathod5906
    @pankajrathod5906 3 months ago

    Sir, please provide me the PDF