This material hits the sweet spot for me. Thank you.
Great video. I feel more comfortable now reading query plans. It's still tough!
best video on spark ever
Great Explanation Daniel :)
I have a request for a similar video on the Spark UI, please.
@Daniel, absolutely genuine content with a very straightforward, step-by-step explanation. Thank you so much for this.
You make us see the things simpler. Great job!
Mind blown! Great content and in-depth explanation!
Great Video. Explaining complex concepts so simply. Fantastic!!
Awesome videos.. looking forward to the next one..
Subscribed 😍 Wishing you all the best, nice explanation Daniel
Thank you for clear cut explanation ☺️
Superb Video! Query plans looked difficult to me before, but you made it simple. Thanks!!!
That's my goal - share it with other people that might find it useful!
Nicely explained
Precise explanation.
Can you also please explain how to write a full outer join in the non-ANSI way?
Just amazing! Keep it up :)
Well done, Daniel. It was a nice explanation.
Thanks for the quality content, very very helpful 👌
A very good explanation!
Glad you liked it!
You are so helpful, thank you so much for the explanation!
wonderful explanation!!
@Daniel FYI In the description Link "Written form to keep for later" is broken
Fixed - there was a / at the end dammit
nice and crisp explanation
That's my goal!
Very nice explanation, Daniel. Would it be possible to walk us through a real-world use case end to end using Spark SQL, with mild complexity?
Will do!
awesome videos !
keep them going
More incoming!
very nicely explained, great video
Awesome video
Great video with a nice explanation! Is there any list that helps us identify which exchanges are performed by Spark itself versus triggered by our code? Is it available in the Spark documentation?
Spark docs usually mention which transformations involve a shuffle. Major examples include joining, sorting and grouping.
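The reason those three shuffle: rows that share a key must all end up in the same partition before they can be grouped or joined. Here's a minimal Python sketch of the hash-partitioning idea behind it (an illustration, not Spark's actual code; the function name and the modulo scheme are just the textbook version):

```python
def assign_partition(key, num_partitions):
    """Pick the target partition for a row by hashing its key.

    Rows with the same key always map to the same partition, so
    grouping and joining must first move rows across executors --
    that movement is the Exchange node you see in the query plan.
    """
    return hash(key) % num_partitions

# All rows for "apple" land in the same partition,
# no matter which input partition they started in:
rows = [("apple", 1), ("banana", 2), ("apple", 3)]
targets = [assign_partition(key, 6) for key, _ in rows]
```

Sorting shuffles for a similar reason, except it range-partitions by key instead of hashing.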
Great Video!
Glad it helps!
Quick question - for Stage 1 and Stage 4, why is Spark using 6 partitions? How does Spark derive the number of partitions it will use to create a dataset or read an existing one?
Unless configured otherwise, Spark defaults the number of partitions to the number of cores on your machine.
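In local mode that fallback looks roughly like this - use the explicit `spark.default.parallelism` setting if one exists, otherwise the core count. A rough Python sketch of the idea (an illustration of the behavior, not Spark's source code):

```python
import os

def default_parallelism(configured=None, total_cores=None):
    # Mirrors the idea behind spark.default.parallelism in local mode:
    # prefer the explicit setting, else fall back to the core count.
    if configured is not None:
        return configured
    return total_cores if total_cores is not None else os.cpu_count()
```

So on a 6-core machine with nothing set explicitly, you'd get 6 partitions, which matches the stages shown in the video. (For shuffle outputs specifically, `spark.sql.shuffle.partitions` applies instead, with its own default of 200.)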
@@rockthejvm Hi Daniel, how can I find the number of cores my cluster has (through the Spark UI etc.) and how many cores my application is using? Also, could you please let me know if I can increase the number of cores from the %sql prompt?
good one
Hi, could you please make a video on the Spark UI?
Check out the video on the DAGs! If you're interested in learning about the Spark UI in detail, the Spark Essentials course on the site does exactly that:
rockthejvm.com/p/spark-essentials
Awesome