Great series. Thanks for this! I like the Databricks env, I work with it every day, but I'm mostly interested in the PySpark commands and your workflow for structuring data, data manipulation, and general data engineering methods. PySpark stuff generally.
Great content, thank you!
After downloading, how do I upload the dataset to Databricks?
When I saw "pipeline" in the title, I was imagining a pipeline similar to the ETL workflows in some tools.
The term has multiple meanings, but here it means building the ETL functionality via code rather than using a UI tool or a declarative language.
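For anyone wondering what "pipeline as code" can look like in practice, here's a minimal PySpark sketch of an extract-transform-load flow. The file path, column names, and output table are hypothetical placeholders, not from the video:

```python
# Minimal "pipeline as code" sketch in PySpark.
# Paths, column names, and table names below are made up for illustration.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Extract: read raw CSV data (hypothetical path)
raw = spark.read.option("header", True).csv("/FileStore/raw/sales.csv")

# Transform: cast types, drop bad rows, aggregate per day
clean = (
    raw.withColumn("amount", F.col("amount").cast("double"))
       .dropna(subset=["amount"])
)
daily = clean.groupBy("order_date").agg(F.sum("amount").alias("total_amount"))

# Load: persist the result as a table (hypothetical name)
daily.write.format("delta").mode("overwrite").saveAsTable("daily_sales")
```

The whole flow lives in code, so it can be versioned, tested, and scheduled like any other program instead of being clicked together in a UI.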
Great, but for some people, the first 15 minutes are really not what they are looking for when they click on your video :)