22 Optimize Joins in Spark & Understand Bucketing for Faster Joins

  • Published 20 Aug 2024

COMMENTS • 32

  • @anuragdwivedi1804
    @anuragdwivedi1804 5 days ago

    truly an amazing video

    • @easewithdata
      @easewithdata  4 days ago

      Thank you 👍 Please make sure to share with your network over LinkedIn 🙂

  • @user-ye2be7kn3o
    @user-ye2be7kn3o 4 months ago +2

    Very nice, so far the best video on joins for beginners

  • @chetanphalak7192
    @chetanphalak7192 5 months ago

    Amazingly explained

  • @sureshraina321
    @sureshraina321 7 months ago

    Most awaited video 😊
    Thank you

  • @DEwithDhairy
    @DEwithDhairy 7 months ago

    PySpark Coding Interview Questions and Answer of Top Companies
    ua-cam.com/play/PLqGLh1jt697zXpQy8WyyDr194qoCLNg_0.html

  • @prathamesh_a_k
    @prathamesh_a_k 3 months ago

    Nice explanation

    • @easewithdata
      @easewithdata  3 months ago

      Thanks! Please make sure to share with your network on LinkedIn ❤️

  • @Abhisheksingh-vd6yo
    @Abhisheksingh-vd6yo 2 months ago

    How are 16 partitions (tasks) created when the partition size is 128 MB and we only have 94.8 MB of data?
    Please explain.

    • @easewithdata
      @easewithdata  2 months ago

      Hello,
      The number of partitions is not determined by the partition size alone; there are some other factors too.
      Check out this article: blog.devgenius.io/pyspark-estimate-partition-count-for-file-read-72d7b5704be5
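
      A rough sketch of the estimation that article describes, with assumed numbers (94.8 MB of input, default 4 MB open cost, default 128 MB max partition size, 16-core default parallelism):

      import math

      total_size  = int(94.8 * 1024 * 1024)  # total input size (94.8 MB in this example)
      open_cost   = 4 * 1024 * 1024          # spark.sql.files.openCostInBytes (default 4 MB)
      max_split   = 128 * 1024 * 1024        # spark.sql.files.maxPartitionBytes (default 128 MB)
      parallelism = 16                       # spark.sparkContext.defaultParallelism (e.g. 16 cores)

      bytes_per_core = (total_size + open_cost) / parallelism
      split_size = min(max_split, max(open_cost, bytes_per_core))
      print(math.ceil(total_size / split_size))  # ~16 partitions, even though the data is under 128 MB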

  • @divit00
    @divit00 10 days ago

    Good stuff. Can you provide me the dataset?

    • @easewithdata
      @easewithdata  10 days ago

      Thanks 👍 The datasets are huge and it's very difficult to upload them. However, you can find most of them at this GitHub URL:
      github.com/subhamkharwal/pyspark-zero-to-hero/tree/master/datasets
      If you like my content, please make sure to share it with your network over LinkedIn 👍 This helps a lot 💓

  • @avinash7003
    @avinash7003 6 months ago +1

    High cardinality --- bucketing, and low cardinality --- partitioning?
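
    A minimal sketch of that rule of thumb (the DataFrame and column names are assumptions, not from the video): partition on a low-cardinality column, bucket on a high-cardinality one.

    # assumes an existing DataFrame `df` with `country` (few distinct values) and `customer_id` (many)
    df.write.format("parquet").partitionBy("country").mode("overwrite").save("/tmp/orders_partitioned")

    df.write.format("parquet").bucketBy(16, "customer_id").sortBy("customer_id") \
        .mode("overwrite").saveAsTable("orders_bucketed")  # bucketing requires saveAsTable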

  • @ahmedaly6999
    @ahmedaly6999 3 months ago

    How do I join a small table with a big table while keeping all the data from the small table?
    The small table is 100k records and the large table is 1 million records:
    df = smalldf.join(largedf, smalldf.id == largedf.id, how='left_outer')
    It runs out of memory and I can't broadcast the small df, I don't know why. What is the best approach here? Please help.

    • @Abhisheksingh-vd6yo
      @Abhisheksingh-vd6yo 2 months ago

      df = largedf.join(broadcast(smalldf), smalldf.id == largedf.id, how='right')
      Maybe that will work here.
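
      A hedged sketch of that idea, reusing the names from the question above; the broadcast still has to fit in driver and executor memory:

      from pyspark.sql.functions import broadcast

      # largedf on the left, smalldf broadcast on the right; 'right' keeps all rows of smalldf,
      # equivalent to smalldf LEFT JOIN largedf but without shuffling the large side
      df = largedf.join(broadcast(smalldf), smalldf.id == largedf.id, how="right")

      # If the broadcast still fails, check spark.sql.autoBroadcastJoinThreshold and driver memory.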

  • @Aravind-gz3gx
    @Aravind-gz3gx 5 months ago

    @23:03, only 4 tasks showed up here; usually it would come up with 16 tasks due to the actual cluster config, but only 4 tasks are used because the data was bucketed before reading. Is that correct?

    • @easewithdata
      @easewithdata  4 months ago

      Yes, bucketing restricts the number of tasks to avoid shuffling, so it's important to choose the number of buckets carefully.
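
      A minimal sketch of pre-bucketing both sides on the join key (bucket count, table and column names are assumptions); with matching bucket counts on the same key, the join can skip the shuffle and the scan runs one task per bucket:

      orders_df.write.bucketBy(4, "customer_id").sortBy("customer_id") \
          .mode("overwrite").saveAsTable("orders_bkt")
      customers_df.write.bucketBy(4, "customer_id").sortBy("customer_id") \
          .mode("overwrite").saveAsTable("customers_bkt")

      # Reading the bucketed tables back: the join avoids a shuffle and
      # the task count follows the bucket count (4 here)
      joined = spark.table("orders_bkt").join(spark.table("customers_bkt"), on="customer_id")
      joined.explain()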

  • @alishmanvar8592
    @alishmanvar8592 2 months ago

    Hello Subham, why didn't you cover shuffle hash join practically here? As far as I can see, you explained it only in theory.

    • @easewithdata
      @easewithdata  2 months ago

      Hello,
      There is very little chance that someone will run into issues with shuffle hash join. The majority of challenges come when you have to optimize sort-merge joins, which are usually used for bigger datasets. And for smaller datasets, nowadays everyone prefers broadcasting.

    • @alishmanvar8592
      @alishmanvar8592 2 months ago

      @@easewithdata Suppose we don't choose any join behaviour, do you mean that shuffle hash join is then the default join?

    • @easewithdata
      @easewithdata  2 months ago

      AQE would optimize and choose the best possible join
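
      For reference, a small sketch of the settings involved and an explicit hint in case you want to force a strategy instead of leaving it to AQE (DataFrame names are illustrative; values shown are the usual defaults):

      spark.conf.set("spark.sql.adaptive.enabled", "true")            # AQE, on by default in Spark 3.2+
      spark.conf.set("spark.sql.autoBroadcastJoinThreshold", "10MB")  # broadcast cutoff (default ~10 MB)

      # Join hints override the planner's choice: "broadcast", "merge", "shuffle_hash", "shuffle_replicate_nl"
      joined = large_df.hint("shuffle_hash").join(small_df, "id")
      joined.explain()  # shows which join strategy was actually picked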

    • @alishmanvar8592
      @alishmanvar8592 2 months ago

      @@easewithdata Hello Subham, can you please come up with a session showing how we can use a Delta table (residing on the golden layer) for Power BI reporting, or import it into Power BI?

    • @PrajwalTaneja
      @PrajwalTaneja 23 days ago

      @@alishmanvar8592 Save the table in Delta format, open Power BI, load that file and do your visualisation.

  • @subhashkumar209
    @subhashkumar209 7 months ago

    Hi,
    I have noticed that you use "noop" to perform an action. Any particular reason not to use .show() or .display()?

    • @easewithdata
      @easewithdata  7 months ago

      Hello,
      show and display don't trigger the complete dataset. The best way to trigger the complete dataset is using count or write, and for write we use noop.
      This was already explained in earlier videos of the series. Have a look.
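
      A minimal sketch of the noop write mentioned above: it runs the full plan over every row but writes nothing, which makes it convenient for benchmarking transformations (assumes an existing DataFrame `df`):

      # Materialize the whole DataFrame without producing any output
      df.write.format("noop").mode("overwrite").save()

      # By contrast, df.show() only pulls enough rows to display,
      # and df.count() adds an aggregation that can change the plan.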

  • @keen8five
    @keen8five 7 months ago

    Bucketing can't be applied when the data resides in a Delta Lake table, right?

    • @easewithdata
      @easewithdata  7 months ago

      Delta Lake tables don't support bucketing, so please avoid using it for them. Try other optimizations like Z-ordering when dealing with Delta Lake tables.
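
      A hedged sketch of that alternative (table and column names are assumptions; OPTIMIZE ... ZORDER BY needs a Delta Lake / Databricks runtime that supports it):

      # Compact the Delta table and co-locate rows on the common join/filter column
      spark.sql("OPTIMIZE orders_delta ZORDER BY (customer_id)")

      # Roughly equivalent with the delta-spark Python API (Delta Lake 2.0+):
      # from delta.tables import DeltaTable
      # DeltaTable.forName(spark, "orders_delta").optimize().executeZOrderBy("customer_id")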

    • @svsci323
      @svsci323 7 months ago

      @@easewithdata So, in real-world projects, should bucketing be applied to RDBMS tables or files?

    • @PrajwalTaneja
      @PrajwalTaneja 23 days ago

      @@svsci323 On DataFrames and Datasets.