22 Optimize Joins in Spark & Understand Bucketing for Faster Joins | Sort Merge Join | Broadcast Join

  • Published Oct 27, 2024

COMMENTS • 41

  • @NileshPatil-z4w · 7 months ago +4

    Very nice, so far the best video for beginners on joins.

  • @DEwithDhairy · 10 months ago +2

    PySpark Coding Interview Questions and Answers of Top Companies
    ua-cam.com/play/PLqGLh1jt697zXpQy8WyyDr194qoCLNg_0.html

  • @sureshraina321 · 10 months ago

    Most expected video😊
    Thank you

  • @adulterrier · 5 days ago

    First of all, big kudos!
    Fun fact: I added up the times on my cluster. The two bucket writes were 7 s and 11 s, the unbucketed join was 40 s, and the bucketed one was 15 s. So 7 + 11 + 15 = 33, which is less than 40. It looks like it pays off to bucket the data first, right?

    • @easewithdata · 4 days ago

      Awesome observation 👏 Yes, if your data is already bucketed on the required join keys, you will see significant improvements in joins, and it seems you already figured that out with your experiment 👍 (see the sketch below).
      In case you like my content, please make sure to share it with your network over LinkedIn 🙂
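
      A minimal PySpark sketch of the idea above, assuming an active SparkSession named spark and two dataframes named orders and customers sharing a customer_id join key (names and bucket count are illustrative, not from the video):

      # Write both sides bucketed (and sorted) on the join key, same bucket count
      orders.write.bucketBy(8, "customer_id").sortBy("customer_id") \
          .mode("overwrite").saveAsTable("orders_bucketed")
      customers.write.bucketBy(8, "customer_id").sortBy("customer_id") \
          .mode("overwrite").saveAsTable("customers_bucketed")

      # Joining the bucketed tables on the bucket key skips the shuffle (Exchange) step
      joined = spark.table("orders_bucketed").join(
          spark.table("customers_bucketed"), on="customer_id")
      joined.write.format("noop").mode("overwrite").save()  # trigger full evaluation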

  • @chetanphalak7192 · 7 months ago

    Amazingly explained

  • @anuragdwivedi1804 · 2 months ago

    truly an amazing video

    • @easewithdata · 2 months ago

      Thank you 👍 Please make sure to share with your network over LinkedIn 🙂

  • @NiteeshKumarPinjala · 1 month ago

    Hi Subham, one quick question.
    Can we un-broadcast a broadcasted dataframe? We can uncache a cached dataset, right? In the same way, can we do un-broadcasting?

    • @easewithdata · 1 month ago

      If you don't want a dataframe to be broadcast in a join, suppress it by setting spark.sql.autoBroadcastJoinThreshold to -1 (see the sketch below).
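
      A minimal sketch of toggling that setting at runtime, assuming an active SparkSession named spark (the values shown are the usual defaults, nothing specific to the video):

      # Disable automatic broadcast joins; explicit broadcast() hints are still honored
      spark.conf.set("spark.sql.autoBroadcastJoinThreshold", "-1")

      # Restore the default 10 MB threshold later if needed
      spark.conf.set("spark.sql.autoBroadcastJoinThreshold", str(10 * 1024 * 1024))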

  • @adulterrier · 5 days ago

    I increased the number of buckets to 16 and got the join in 3 seconds, while writing the buckets took 3 and 6 seconds. Can I draw any conclusions from this?

    • @easewithdata · 4 days ago

      Yes, increasing the number of buckets can improve performance, because the number of tasks executing the join also increases. The only thing to keep in mind is the small-file problem: don't create so many buckets that you end up with lots of small files.

  • @Aravind-gz3gx · 7 months ago

    At 23:03, only 4 tasks are shown. Usually it would come up with 16 tasks based on the cluster configuration, but only 4 tasks are used because the data was bucketed before reading. Is that correct?

    • @easewithdata · 7 months ago

      Yes, bucketing restricts the number of tasks in order to avoid shuffling, so it's important to choose the number of buckets carefully.

  • @Abhisheksingh-vd6yo · 4 months ago

    How are 16 partitions (tasks) created when the partition size is 128 MB and we have only 94.8 MB of data? Please explain.

    • @easewithdata · 4 months ago

      Hello,
      The number of partitions is not determined by partition size alone; other factors come into play too (see the sketch below).
      Check out this article: blog.devgenius.io/pyspark-estimate-partition-count-for-file-read-72d7b5704be5
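
      A rough sketch of the estimate Spark makes for file-based reads (the real logic lives in Spark's FilePartition code; the parallelism and file count below are assumptions for illustration):

      import math

      max_partition_bytes = 128 * 1024 * 1024        # spark.sql.files.maxPartitionBytes
      open_cost_in_bytes  = 4 * 1024 * 1024          # spark.sql.files.openCostInBytes
      default_parallelism = 16                       # e.g. total cores available (assumed)
      total_bytes         = int(94.8 * 1024 * 1024)  # size of the input data
      num_files           = 1                        # assumed single file

      bytes_per_core  = (total_bytes + num_files * open_cost_in_bytes) / default_parallelism
      max_split_bytes = min(max_partition_bytes, max(open_cost_in_bytes, bytes_per_core))
      est_partitions  = math.ceil(total_bytes / max_split_bytes)

      print(est_partitions)  # ~16: bytes_per_core (~6 MB) caps the split size, not 128 MB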

  • @ahmedaly6999 · 6 months ago

    How do I join a small table with a big table when I want to keep all the data from the small table? For example, the small table is 100k records and the large table is 1 million records:
    df = smalldf.join(largedf, smalldf.id == largedf.id, how='left_outer')
    It runs out of memory, and I can't broadcast the small df (I don't know why). What is the best approach here? Please help.

    • @Abhisheksingh-vd6yo · 4 months ago

      df = largedf.join(broadcast(smalldf), smalldf.id == largedf.id, how='right') might work here (see the sketch below).
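
      A self-contained version of that suggestion, keeping every row of the small table (the dataframe and column names come from the question above; the fallback is an assumption, not something tested here):

      from pyspark.sql.functions import broadcast

      # Broadcast the small side explicitly; a right join from the large side
      # keeps all rows of smalldf, equivalent to smalldf left-joining largedf
      df = largedf.join(broadcast(smalldf), largedf.id == smalldf.id, how="right")

      # If even that runs out of memory, drop the hint and let Spark fall back
      # to a sort merge join
      df = largedf.join(smalldf, largedf.id == smalldf.id, how="right")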

  • @prathamesh_a_k · 5 months ago

    Nice explanation

    • @easewithdata · 5 months ago

      Thanks! Please make sure to share with your network on LinkedIn ❤️

  • @avinash7003 · 9 months ago +1

    High cardinality for bucketing and low cardinality for partitioning?

  • @alishmanvar8592 · 4 months ago

    Hello Subham, why didn't you cover Shuffle Hash Join practically here? As far as I can see, you explained it only in theory.

    • @easewithdata · 4 months ago

      Hello,
      There is very little chance that someone will run into issues with Shuffle Hash Join. Most of the challenges come when you have to optimize Sort Merge Join, which is usually used for bigger datasets, and for smaller datasets everyone nowadays prefers broadcasting.

    • @alishmanvar8592 · 4 months ago

      @easewithdata Suppose we don't choose any join behavior, do you mean Shuffle Hash Join is the default join?

    • @easewithdata · 4 months ago

      AQE will optimize and choose the best possible join strategy.

    • @alishmanvar8592 · 4 months ago

      @easewithdata Hello Subham, can you please come up with a session showing how we can use a Delta table (residing in the golden layer) for Power BI reporting, or how to import it into Power BI?

    • @PrajwalTaneja · 3 months ago

      @alishmanvar8592 Save the table in Delta format, open Power BI, load that file, and do your visualization.

  • @keen8five · 10 months ago

    Bucketing can't be applied when the data resides in a Delta Lake table, right?

    • @easewithdata · 9 months ago

      Delta Lake tables don't support bucketing, so please avoid it there. Try other optimizations such as Z-Ordering when dealing with Delta Lake tables (see the sketch below).
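
      A minimal sketch of Z-Ordering a Delta table instead of bucketing it (the table name and column are illustrative; assumes Delta Lake 2.0+ or Databricks, where OPTIMIZE ... ZORDER BY is available):

      from delta.tables import DeltaTable

      # Compact the table and co-locate rows with similar customer_id values
      DeltaTable.forName(spark, "sales_delta").optimize().executeZOrderBy("customer_id")

      # Equivalent SQL form
      spark.sql("OPTIMIZE sales_delta ZORDER BY (customer_id)")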

    • @svsci323 · 9 months ago

      @easewithdata So, in a real-world project, should bucketing be applied to RDBMS tables or to files?

    • @PrajwalTaneja · 3 months ago

      @svsci323 On DataFrames and Datasets.

  • @subhashkumar209 · 9 months ago

    Hi,
    I have noticed that you use "noop" to perform an action. Any particular reason not to use .show() or .display()?

    • @easewithdata · 9 months ago +1

      Hello,
      show and display don't trigger processing of the complete dataset. The best way to trigger the complete dataset is a count or a write, and for the write we use noop (see the sketch below).
      This was already explained in past videos of the series. Have a look.
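
      A minimal sketch of that noop trigger, where df stands for whichever dataframe is being benchmarked:

      # Forces full evaluation of df without writing anything anywhere,
      # which makes it handy for timing joins in the Spark UI
      df.write.format("noop").mode("overwrite").save()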

  • @divit00 · 2 months ago

    Good stuff. Can you provide the dataset?

    • @easewithdata · 2 months ago

      Thanks 👍 The datasets are huge and it's very difficult to upload them. However, you can find most of them at this GitHub URL:
      github.com/subhamkharwal/pyspark-zero-to-hero/tree/master/datasets
      If you like my content, please make sure to share it with your network over LinkedIn 👍 This helps a lot 💓

    • @divit00 · 1 month ago

      @easewithdata For me, even with AQE disabled it's doing a broadcast join. What could be the reason? I have used your dataset and code.
      Spark 3.3.2
      df_joined = emp.join(broadcast(dept), on=emp.department_id == dept.department_id, how="left")
      df_joined.explain()
      == Physical Plan ==
      *(2) BroadcastHashJoin [department_id#7], [department_id#58], LeftOuter, BuildRight, false

    • @NiteeshKumarPinjala · 1 month ago

      @divit00 I guess post Spark 2.0, data smaller than 10 MB is broadcast by default; otherwise the join operation will be sort merge.

    • @divit00 · 1 month ago

      @NiteeshKumarPinjala But his Spark is greater than 2.0, and we are setting the auto-broadcast threshold to 0.
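
      One likely explanation for the plan above: an explicit broadcast() hint is honored regardless of spark.sql.autoBroadcastJoinThreshold, which only governs automatic broadcasts. A small sketch, reusing the emp/dept names from the snippet above:

      from pyspark.sql.functions import broadcast

      # Threshold at -1 (or 0) disables *automatic* broadcasting only
      spark.conf.set("spark.sql.autoBroadcastJoinThreshold", "-1")

      # Explicit hint: still planned as a BroadcastHashJoin
      emp.join(broadcast(dept), emp.department_id == dept.department_id, "left").explain()

      # No hint: with the threshold disabled, this should fall back to a SortMergeJoin
      emp.join(dept, emp.department_id == dept.department_id, "left").explain()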