Amazing Lecture. Thank you very much. I am posting my notes with timestamps to help guide other learners; I hope that's okay.
*Dimensional Data Modeling Overview*
*Key Concepts*
- *Complex Data Types:*
  - *Struct:* Similar to a table within a table, used for organizing related data.
  - *Array:* A list in a column, useful for compact data representation. 00:01 - 00:32
- *Dimensions:* Attributes of an entity, such as a person's birthday or favorite food. They can be categorized as:
  - *Identifier Dimensions:* Uniquely identify an entity (e.g., user ID, device ID). 01:58 - 02:29
  - *Attributes:* Provide additional information but are not critical for identification. They can be:
    - *Slowly Changing Dimensions:* Attributes that change over time (e.g., favorite food). 02:54 - 03:26
    - *Fixed Dimensions:* Attributes that do not change (e.g., birthday). 03:53 - 04:24
*Data Modeling Types*
- *OLTP (Online Transaction Processing):* Focuses on transaction-oriented applications, emphasizing data normalization and minimizing duplication. 12:26 - 12:59
- *OLAP (Online Analytical Processing):* Optimized for query performance, allowing for fast data retrieval without extensive joins. 12:55 - 13:26
- *Master Data:* Serves as a middle ground between OLTP and OLAP, providing a complete and normalized view of data for analytical purposes. 14:23 - 14:55
*Cumulative Table Design*
- Cumulative tables maintain a complete history of dimensions, allowing for the tracking of changes over time. They are created by performing a full outer join between today's and yesterday's data tables. 21:04 - 21:36
*Trade-offs in Data Modeling*
- *Compactness vs. Usability:* Using complex data types can lead to more compact datasets but may complicate querying. 00:30 - 01:00
- *Empathy in Data Modeling:* Understanding the needs of data consumers (analysts, engineers, customers) is crucial for effective data modeling. 07:13 - 07:44
*Important Considerations*
- *Temporal Cardinality Explosion:* The blow-up in row count that can occur when a dimension is modeled with one row per entity per time period (e.g., per user per day). 06:16 - 06:46
- *Run Length Encoding:* A powerful method for data compression, particularly in big data contexts. 06:44 - 07:17
*Cumulative Table Design*
- *Full Outer Join:* This technique is used to merge data from two different time periods (e.g., yesterday and today) to capture all records, even if they exist in only one of the datasets. This allows for a comprehensive view of user activity over time. 22:01 - 22:32
- *Historical Data Tracking:* The cumulative table design is essential for maintaining historical user activity data. For instance, Facebook utilized a table called "Dim All Users" to track user activity daily, which helped in analyzing user engagement metrics. 22:30 - 23:02
- *State Transition Tracking:* This involves categorizing user activity states (e.g., churned, resurrected, new) based on their activity from one day to the next. This method allows for detailed analysis of user behavior transitions. 23:00 - 23:30
- *Cumulative Metrics:* By holding onto historical data, analysts can compute various metrics, such as the duration since a user was last active. This can be done by incrementing a counter for inactive days. 27:14 - 27:45
- *Data Pruning:* To manage the size of the cumulative table, it is important to remove inactive users after a certain period (e.g., 180 days of inactivity) to maintain efficiency. 26:17 - 26:47
- *Cumulative Table Design Process:* The design involves using two data frames (yesterday's and today's data) to build a comprehensive view. The process includes performing a full outer join, coalescing user IDs, and computing cumulative metrics. 26:45 - 27:17
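A minimal SQL sketch of this cumulation step (hypothetical table and column names such as users_cumulated and daily_events, not the exact ones from the lecture):

```sql
-- Build the next snapshot from yesterday's snapshot plus today's activity.
WITH yesterday AS (
    SELECT user_id, dates_active, days_since_last_active
    FROM users_cumulated                 -- hypothetical running snapshot table
    WHERE snapshot_date = DATE '2023-01-01'
),
today AS (
    SELECT DISTINCT user_id
    FROM daily_events                    -- hypothetical daily activity table
    WHERE event_date = DATE '2023-01-02'
)
INSERT INTO users_cumulated
SELECT
    COALESCE(y.user_id, t.user_id) AS user_id,
    -- append today's date to the history array only if the user was active today
    CASE
        WHEN t.user_id IS NULL THEN y.dates_active
        WHEN y.user_id IS NULL THEN ARRAY[DATE '2023-01-02']
        ELSE y.dates_active || ARRAY[DATE '2023-01-02']
    END AS dates_active,
    -- cumulative metric: reset to 0 when active today, otherwise increment
    CASE
        WHEN t.user_id IS NOT NULL THEN 0
        ELSE y.days_since_last_active + 1
    END AS days_since_last_active,
    DATE '2023-01-02' AS snapshot_date
FROM yesterday y
FULL OUTER JOIN today t
    ON y.user_id = t.user_id;
```

Because each snapshot is built from the previous one, backfills of this table have to run one day at a time, which is the sequential-backfill drawback noted below.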
*Strengths and Drawbacks of Cumulative Table Design*
- *Strengths:*
  - Enables historical analysis without the need for complex GROUP BY operations, as an entity's full history is stored in a single row. 28:39 - 29:10
  - Facilitates scalable queries on historical data, which can often be slow when querying raw daily data. 29:36 - 30:07
- *Drawbacks:*
  - Backfilling data can only be done sequentially, which may slow down the process compared to parallel backfilling of daily data. 30:05 - 30:36
  - Managing personally identifiable information (PII) can become complex, requiring additional filtering to remove inactive or deleted users. 30:32 - 31:05
*Compactness vs. Usability Trade-off*
- *Usability:* Usable tables are straightforward and easy to query, often favored by analysts. 31:02 - 31:33
- *Compactness:* Compact tables minimize data storage but can be difficult to work with analytically. They often require decompression and decoding. 32:59 - 33:31
- *Middle Ground:* Using complex data types like arrays and structs can provide a balance between usability and compactness, allowing for efficient data modeling. 33:29 - 34:00
*Data Structures*
- *Structs:* These are like tables within tables; each field (key) can have its own data type for its value. 34:27 - 34:58
- *Maps:* Maps require all values to be of the same type, which can lead to casting issues. 34:56 - 35:27
- *Arrays:* Arrays are suitable for ordered datasets, and they can contain structs or maps as elements. 35:52 - 36:23
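As a rough illustration of combining these types (PostgreSQL-style syntax with hypothetical names, not the exact schema from the lab):

```sql
-- Hypothetical composite type: a struct whose fields can each have a different type.
CREATE TYPE daily_activity AS (
    activity_date  DATE,
    num_events     INTEGER,
    primary_device TEXT
);

-- Hypothetical table: one row per user, with the history held as an array of structs.
CREATE TABLE user_activity_history (
    user_id       BIGINT,
    activity      daily_activity[],   -- ordered array, one element per day
    snapshot_date DATE,
    PRIMARY KEY (user_id, snapshot_date)
);
```

A map, by contrast, would force num_events and primary_device into a single value type (usually strings), which is where the casting issues come from.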
*Run Length Encoding:* This technique compresses data by storing the value and the count of consecutive duplicates, which is particularly useful for temporal data. 39:10 - 39:41
*Data Sorting and Joins:* Maintaining the order of data during joins is crucial for effective compression. If sorting is disrupted, it can lead to larger datasets than expected. 41:05 - 41:37 Using arrays can help preserve sorting during joins. 41:35 - 42:05
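A conceptual SQL sketch of the sorting point (hypothetical table and column names; in practice the payoff shows up when the result is written to a columnar format like Parquet):

```sql
-- Re-sort after the join so that repeated values in low-cardinality columns
-- (e.g., country) sit next to each other and can be run-length encoded as
-- (value, count) pairs instead of being stored once per row.
CREATE TABLE fct_events_sorted AS
SELECT
    e.user_id,
    e.event_date,
    d.country          -- low-cardinality dimension: forms long runs once sorted
FROM fct_events e
JOIN dim_users d
    ON e.user_id = d.user_id
ORDER BY d.country, e.user_id, e.event_date;
```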
Awesome work..
👏🏾👏🏾👏🏾
Beautiful ❤ thank you
Thanks much for sharing
Assignment questions? Where are they?
man, idk what to say, I'm living paycheck to paycheck here in Seattle and can't afford to drop money on any huge bootcamps or courses. I want to become a data engineer so I won't have to worry about what me or my dog will eat every day. I'm not getting into the role simply for the money; I genuinely do have a passion for this specific field. I am going to try and kill this course, and I will come back to this comment in the future! Thanks Zach.
Wish you the very best :)
Wish you the best man stay strong
I’m restarting my Data Engineering journey from the ground up, rebuilding my skills from scratch, and here comes this incredible free YouTube bootcamp! It feels like the universe is aligning perfectly.
As a working professional, I can relate so much to this content. I never truly thought about data modeling this deeply while working on tables for streaming data. But through this bootcamp, I’ve learned so much that directly connects with the pro tables in my organization. It’s been a real eye-opener!
A huge thank you to Zach and the team for putting this together. Starting two days late, but super excited to catch up and dive in! 🚀
Amazing work. Can’t imagine how difficult this is to cater to 1000s of students. Will be going through your entire course.
That’s what's called experience; you are clearly connecting all the dots with end-to-end understanding. Appreciate your work 🤘🏻🤘🏻
Day 2!
Today has been incredibly informative. I've learned a lot of new things, or at least gotten a solid introduction to some concepts. Not everything is crystal clear yet, but I'm excited to dive deeper into future videos and gain a better understanding. Looking forward to continuing this journey! As I mentioned before I will try to comment on all the bootcamp video to stay focused.
39 yrs old … want to change my career. Just came across your videos. This is what I’ll be doing during the holidays.
This is great stuff; I started working as a junior ETL dev earlier this year and am looking forward to growing with this series. Thank you, Zach and the rest of the team, for making this available!
Are there any openings in your company, bro?
Great lecture and sharing of experience. I am loving this. I am in China on a business trip and had issues getting my VPN to work, so I am just starting the lecture today. I am going to binge-watch all 9 videos in the next two days. It is sooo cool. Thank you, thank you!
Great video, Zach! You explained cumulative table modeling so clearly and made it easy to understand. The examples were really helpful. Thanks for sharing this
Thanks Zach for sharing real-world knowledge and experience. Your videos are full of practical use cases, and I am sure a lot of folks are going to benefit from your boot camp, including me. I am just a beginner in this field, so I will complete the free boot camp now to develop an overall understanding of the DE world, and then I would join the paid boot camp to begin my journey and gain sound skills and knowledge in a short span of time. Really appreciate your efforts and knowledge sharing. God bless you bro!!
This was such an educational and informational video. Thanks, Zach. :) I am in a very bad financial situation too. I am going to squeeze out as much time as I can to watch all of your videos. I know I can do this.
Thank you for this initiative, I already learned things on this first lecture ! I didn’t expect to see that much quality in a Free bootcamp, well done
This bootcamp has come at the PERFECT time for me. Thank you Zach 🎉
This is incredible! I couldn't help but notice how often the phrase 'when I was working with Netflix' came up.
Amazing Lecture Zach! Looking forward to great learnings ahead! Thanks!
I’ll keep saying it: you inspire me a lot. The passion you hold would make anyone want to be a data engineer 😊
I am thrilled to watch your free bootcamp as your paid bootcamps are out of my budget!
FYI Master Data in data modeling is also a term used for a persistent entity, as opposed to transactional data (events or things that happen involving persistent entities) or control data (controls application behavior). This is what Master Data Management is talking about. So you might have master data for employees and customers, and then transaction data for pay increases or purchases.
I just resigned from my current job, I'm serving my notice period, and I have an offer, but not a great one.
This comes at a perfect time for me. Thanks man.
Thanks Zach for starting the bootcamp; in fact, this came at the right time. I have enrolled and am looking forward to completing this bootcamp.
Great lecture, thank you very much for this FREE bootcamp
Awesome presentation. Thank you for all the work you put into this
Amazing work brother, you are sharing a wealth of experience and awesome insight and intelligence about data. Thank you for teaching us and providing such valuable information.
For some time now I have been planning to move to data engineering from the application and infra side, but I was not able to structure my learning. This will help me a lot. Thanks Zach!
Day 1 Complete. Thanks Zach
Glad to be here, thank you for doing this zach☺
Thank you for the free knowledge.. joining the training
Thanks a lot for this. Appreciate your work Zach.
Thank you, great work Zach!! 🚀
Absolutely loved it ♥ keep up the good work
Massively appreciated thanks Zach.
Awesome. Run length encoding insight is good
Great lecture, learned a lot
Thank you for this amazing opportunity!
Zach this is awesome! Looking forward for your next bootcamp in 2025 !
this is perfect! Thanks Zach
Thank you for the opportunity. Appreciate you!
Love ❤️ from India..
You are my role model 😊
Super excited for this 😍
Awesome session !
Cool!
it turns out that you come to some things gradually.
I realized that the method I started using to store products in a table is a "cumulative table design"
Thanks a lot buddy, we really appreciate it.
Thank you very much for the course!
Amazing stuff @Zach
I'm eager to follow you in this journey.
Long live the king!
Part of the delta is whether or not history is business-relevant. For example, the development of an organizational structure (which departments report to other departments). The business may or may not care about being able to see the data as it was historically. If they didn't, you would model it as Type 1, IMHO.
Business requirements and needs dictate modeling the most. Totally agree. Historical preservation is a very important business need though for many many use cases
Thank you for the lecture.
Zach, you are a hero ❤
Thanks a lot this is super useful.
Great explanations, thank you :)
Hopefully slides will be updated in the "slides" tab of the lessons. Currently I don't see them. It would be really helpful to get the slides for future reference
Thank You Zach!
Thanks Zach!
Can someone explain what Zach means when he says that backfilling data can only be done sequentially, which may slow down the process compared to parallel backfilling of daily data? 30:05 - 30:36
In the cumulative data table example discussed, you need the previous day's data to already exist in the table so that whatever aggregates or calculations need to be stored can be run, for example the number of active users today vs. tomorrow and so on. This can only happen if yesterday's data is available in the original table.
A daily load, by contrast, would be for data that has no such historical dependencies, so all days can be run in one go, in parallel.
Thank you Zach!
Starting boot camp today
thanks much zach
Thanks guys!
"i like to suffer while i eat my food"
"Cool, lets learn data from you"
36:07, "temporal cardinality explosions of dimensions" the hell does that mean bro
Thank you so much for this
Hey Zach, great video. For RLE, you mentioned sorting after the join. How do you think about the trade off in compute vs storage in that case? Compute for re-sorting vs savings from RLE when storing
Really depends on how many times downstream the data set is used. If it's used more than 3 times, sorting is probably worth it
I want to come back here as a Data Engineer
Way to go!
Brilliant!
You think I can learn basic Python and SQL at the same time as I do the bootcamp?
Very tough but potentially doable
We should build models for different consumers? Or we should prefer one model for all consumers?
Thanks for this detailed information. Quick question, For the cumulative table, how do you expect to scale if a larger user base queries it? Don't you think this table will eventually be a bottleneck for the application? How do you suggest overcoming this?
Cumulative tables are meant for analytical data, not application data. They're meant for analysts and very low queries per second, not applications and very high queries per second.
If you want to bring the cumulative table BACK into the production application, you MUST index on user_id for fast lookups and you MUST always query based on user_id so you never bring in the whole dataset
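A minimal sketch of what that could look like (hypothetical table name, assuming a PostgreSQL-style store):

```sql
-- Hypothetical: index the cumulative table on user_id so lookups never scan the whole dataset.
CREATE INDEX IF NOT EXISTS idx_users_cumulated_user_id
    ON users_cumulated (user_id);

-- And always filter on user_id when serving the application:
SELECT *
FROM users_cumulated
WHERE user_id = 12345;
```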
Hi @Zach, when we query any app on the front end as a customer, do we query the master table or the transactional relational tables?
You should query the production data in your RDBMS if it's serving customers.
Hey Zach, my SQL and Python skills are not the best. What are the minimum topics I should know in both SQL and Python to get started with this journey?
Hi, where can we get the slides? I looked in the GitHub course material but could not find them; maybe I need to look again.
Hey Zach, I have a little confusion here. When we talk about temporal dimensions, you mentioned that joining that dataset with other downstream tables in Spark will mess up the sorting. But during a shuffle join operation in Spark, all the data pertaining to one join key will be collected in one partition, correct? So when we save it as Parquet, I think it will not affect the run-length encoding. Please help me understand.
Join the discord
comment to support the Boot Camp Series 👍👍👍👍👍👍
Thanks man
Thanks for the bootcamp @Zach. Could you please add sequence numbers to the videos? This would be useful going ahead as well. Thanks!
Data engineering boot camp playlist is on my page already
Loved the first lecture. Curious where I can find the link to the discord server that you mentioned in your intro video
Bootcamp.TechCreator.io. Join here and you'll get emailed the Discord link.
Thanks for the video. May I know how to access the lab, as the website doesn't show me anything apart from assignments?
It’s in the GitHub repo
When you say yesterday, does it mean data till yesterday or just yesterday’s data
Data until yesterday
this isn't on the Dashboard
Working on it man. I’m trying my best to
@@EcZachly_ thanks bro
It's there now. I think at 5 sharp it's on YT, and it takes some time to show up on the dashboard; that probably just happened today, unsure.
@@dnyanaisurkutwar7619 Haven't ever done a 20,000 person free boot camp before. We missed one deadline by 38 minutes
Hello, there is Spark, Kafka and all, but where are their tutorials? :(
Thanks @zach
I am a little late to the party but I am wondering if I can get started and catch up?
Of course!
Could someone explain in an easy way what shuffling is and how/why it’s used in the context of data engineering please?
@x__nisha.s__x In data engineering, shuffling refers to the process of redistributing data across the nodes in a distributed computing system, such as Apache Spark or Hadoop. It plays a crucial role in operations that require data from multiple partitions or nodes to be grouped, aggregated, or joined.
Here’s an easy explanation:
In a distributed system, shuffling is this regrouping process: data that is initially distributed arbitrarily across the nodes gets reorganized based on specific keys or attributes (e.g., all rows with the same ID, category, or timestamp end up together).
@ thanks ☺️
Great content. But what happens when a complex data type like array or struct reaches its limit size?
That limit is 65,000 elements. If you have one item per day, that’s 130 years of data.
Thanks 🙏
is this for a complete beginner?
What do you think?
I'm completely new to data engineering
Hi Zach can you share the discord link ?
Bootcamp.TechCreator.io if you sign up here. You will be emailed it
@ got it and joined :)
Would future lessons dive deeper?
The lab gives you an applied example of this
Everyone is saying it's great, but nothing is sticking in my head.
Any suggestions?
Probably missing prerequisites
What is the link for the discord so I can ask questions?
Join the boot camp at bootcamp.techcreator.io when you register, you’ll get a discord link emailed to you
@@EcZachly_ Great thank you!
Reusing old boot camp content
So who is here from Nigeria??? Let's learn together please
Join the boot camp and discord here!
bootcamp.techcreator.io
@EcZachly_ could you please provide the slides that you used in the video? That would be helpful. I know there is a slides section on your platform; if the slides will be available there, then that's fine. Amazing work man.
Thanks Zach!!
thanks Zach!!
Thanks 🙏