The best data engineering course on YouTube. Thanks a lot, bro, for your effort, and that too free of cost. Really proud of you!
You are most welcome
Very informative. Please come up with end-to-end projects using Databricks.
Hi, could you please provide the slides and notebooks? That would be really helpful for quick revision before interviews.
Thanks for providing in-depth knowledge about these topics. Amazing.
Glad you like them! My pleasure!
Hi Raja Sir, the content in this video and playlist is very good, but I'm not able to work out the sequence to follow because some serial numbers are missing. Also, the playlist has 65 videos but some serial numbers go above 100. Can you please help with the sequence of videos to go through the playlist?
Thank you for providing such detailed videos.
Glad you like them! Keep watching
A doubt: as you said, Spark ultimately converts DataFrames into RDDs while processing. Then how do benefits like avoiding the GC process eventually come into play when using DataFrames instead of RDDs? I'm fairly new to this area. And thanks for this playlist.
GC is related to on-heap memory, not to DataFrames or RDDs specifically.
So does it mean dataframes don’t run in heap memory ?
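A note for this thread: by default DataFrame data does still live in JVM heap memory, but Tungsten stores rows as compact binary buffers rather than as many small JVM objects, so there is far less garbage for the GC to trace and collect than with RDDs of deserialized objects. Fully off-heap storage is opt-in. A hedged config sketch (property names as documented for Spark 3.x):

```
# spark-defaults.conf sketch (assumption: Spark 3.x property names).
# DataFrame rows are on-heap by default, but in Tungsten's compact
# binary format, which creates far fewer objects for the GC than
# RDDs of plain Java/Python objects. Off-heap storage is opt-in:
spark.memory.offHeap.enabled   true
spark.memory.offHeap.size      2g
```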
As per your slide on the differences among RDD, DataFrame and Dataset, you mentioned the supported languages for DataFrame are Java, Scala, Python and R. What about SQL for these? Could you please clarify, Raja, if possible?
Hi Ranjan, yes, Spark SQL is also supported through the DataFrame API.
Very nicely explained the concepts.
Glad you liked it! Thanks
Hi, could you please activate the subtitles for this and other videos? These are really great resources and I don't want to miss anything.
Hi Abdul, sure will activate the subtitles
@@rajasdataengineering7585 I would also appreciate the subtitles so I don't miss information
Very informative
Glad it was helpful!
Hi Raja, your videos are very informative. In terms of RDD/DataFrame/Dataset, if someone asks which one is faster in execution, what would be your answer?
Hi Sandani, good question.
RDD is the native API for Spark, so whether we use Dataset or DataFrame, it is internally converted to RDDs. But RDD is quite outdated for programming nowadays. DataFrame is widely used across projects due to developer convenience, so I would recommend going with DataFrame. Dataset has limitations with programming languages.
For detailed information, please refer this video
ua-cam.com/video/g4T25_4HGM0/v-deo.html
May I know the first video of the series?
Great work. 👍👏👏
Thank you! Cheers!
Sir, can you please explain what serialization is?
Sure, will create a video on this requirement
Could you make a repo for all your videos? Otherwise it is hard to follow along. Thanks a lot, Raja.
Your content is very good. Can you provide a PDF of the PPT?
Hi Raja, could you please fix the order of the playlist? thanks in advance
Hi Abdullah, sure I will do it
So PySpark uses DataFrame and not Dataset, right?
Yes, Dataset is only available in Scala and Java, while DataFrame is available in PySpark, R, Scala and SQL.
Best One
Thanks!
RDD is not type safe, right? They don't enforce datatypes, which means the type of the data in an RDD can change at runtime. This can lead to errors if the data is not properly checked.
Pls check the official Spark documentation instead of ChatGPT to know the truth.
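For what it's worth, per the Spark docs the answer depends on the language: in Scala an `RDD[T]` carries its element type at compile time, while in PySpark everything is dynamically typed, so a bad element type only surfaces when an action finally runs. A tiny plain-Python sketch of that runtime-only failure mode (no Spark required; the lazy built-in `map` stands in for `rdd.map`, and forcing the iterator stands in for an action like `collect`):

```python
# Conceptual sketch (plain Python, no Spark): PySpark carries no
# compile-time type information, so transformations happily accept
# mixed data and the type error surfaces only when an action runs.
data = [1, 2, "three", 4]                  # a mixed-type collection sneaks through

incremented = map(lambda x: x + 1, data)   # lazy, like rdd.map — no error yet

try:
    result = list(incremented)             # forcing evaluation, like rdd.collect()
except TypeError as exc:
    result = f"failed at runtime: {exc}"   # the bad record is caught only here

print(result)
```

The same lambda would have been rejected at compile time in Scala if applied to an `RDD[Int]` containing a string, which is the distinction the slide is getting at.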
Dataset also has Catalyst optimizations, but in the slide it just says "optimization".
Yes, Dataset and Spark SQL also use the Catalyst optimizer; by "optimization" I mean the Catalyst optimizer.
In the previous slide I mentioned that Dataset consolidates the best features of both RDD and DataFrame.
done
Thank you
You're welcome
amazing
Thank you! Cheers!
Super
Can you please provide sequence numbers for your videos?
Sure Krishna, I will arrange the videos and create a proper playlist. Please allow me some time for that.
Are you providing real-time training, Raja ji?
@@rajasdataengineering7585 sent email
Thanks, will respond asap
Sir, can you share the PDF?
DataFrames are strongly type safe and RDDs are not, right? I think you need to modify the slide.
No, DataFrames are weakly type safe, whereas RDDs and Datasets are strongly type safe.
To the Spark engine, a DataFrame is a collection of rows (not individually typed columns), so it can't validate column data types at compile time. So it is not strongly type safe. Hope that makes sense.
Please refer to the Spark documentation to learn more about type safety.
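To illustrate the point above with a sketch (plain Python, no Spark; `select` here is a hypothetical stand-in for DataFrame column resolution, not a Spark API): because a row is generic field-to-value data, a mistyped column name parses fine and only fails when the query is analysed at runtime, similar to Spark raising an `AnalysisException`.

```python
# Hypothetical sketch: a Row modeled as an untyped field->value mapping.
row = {"name": "alice", "age": 31}

def select(r, col):
    """Stand-in for DataFrame column resolution: a runtime-only check."""
    if col not in r:
        # Analogous to Spark's AnalysisException: raised when the query
        # runs, never at compile time, since rows carry no static types.
        raise KeyError(f"cannot resolve column '{col}'")
    return r[col]

print(select(row, "age"))     # fine: 31
try:
    select(row, "agee")       # the typo is only caught when this line runs
except KeyError as exc:
    print(exc)
```

A Scala Dataset of a case class, by contrast, turns the same typo into a compile-time error, which is why the slide calls Datasets strongly type safe.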
DataFrames are mutable.
No, dataframe is immutable
In Pyspark we can do this
df = df.select(...) or any other transformation, which will change its state? Or am I understanding mutability wrong?
Yes, you can do df = df.select(...), but that does not mean the DataFrame is mutable. What happens internally is that the name df is re-pointed to a new DataFrame built by lazy evaluation; the previous DataFrame is not modified, it simply becomes unreferenced and is dropped.
A DataFrame is always immutable.
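The rebinding-versus-mutation point above can be sketched in plain Python (no Spark needed; tuples stand in for DataFrames, since both are immutable):

```python
# Conceptual sketch (plain Python): rebinding a name is not mutation.
# df = df.select(...) points the name df at a NEW object; the old
# DataFrame is unchanged and is garbage-collected once unreferenced.
original = (1, 2, 3)          # tuples, like DataFrames, are immutable
df = original
df = df + (4,)                # builds a new tuple, then rebinds the name

assert original == (1, 2, 3)  # the first object was never modified
assert df == (1, 2, 3, 4)     # df now names a different object
assert df is not original
```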
Ok thank you Raja for helping out . Got it .
Raja, I am confused between two topics: optimized writes and auto compaction. I saw you made a video on OPTIMIZE but I am still confused.
Raja bro, could you please provide your email ID? I need to learn this course.