These videos are so valuable. I love how we move from an idea - let's look at correlation - to updating the model, to adding new features. Please please please do more kaggle competitions!
Thank you so much - subscribed!
Thanks for this video; hoping for more videos like these!
Really great to hear your thought process on this one!
Thank you so much for sharing your knowledge step by step, your channel is a hidden gem!
Can someone elaborate on the idea of turning each feature's values into some kind of "embedding", given that the columns seem to be uncorrelated? What is the logic behind this, and how does that explain the decent improvement we saw? I'm genuinely intrigued.
I asked myself the exact same thing!
Would be nice if someone could explain the thoughts about this improvement :)
The intuition is that each feature can be broken down into common ingredients, which is what the "embedding" layer does. Since the features are uncorrelated, each one is a different combination of those shared ingredients. Operating on the basic ingredients that make up the features, rather than on the raw columns, lets the NN map the input space to the output space more efficiently.
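For anyone who wants to see the idea concretely, here is a minimal NumPy sketch of a per-feature embedding: each scalar feature gets its own learned weight vector, so every column is expanded into a small vector of shared "ingredients". The dimensions and names are illustrative, not the exact model from the video.

```python
import numpy as np

rng = np.random.default_rng(0)

n_features = 4  # independent input columns
emb_dim = 3     # size of each feature's "ingredient" vector

# One learned weight vector per feature. In a real model these would
# be trainable parameters (e.g. an nn.Embedding or per-feature Linear).
W = rng.normal(size=(n_features, emb_dim))

def embed(x):
    # x: (batch, n_features) -> (batch, n_features, emb_dim)
    # Each scalar value scales its feature's weight vector.
    return x[:, :, None] * W[None, :, :]

x = rng.normal(size=(8, n_features))
out = embed(x)
print(out.shape)  # (8, 4, 3)
```

The downstream layers then operate on these vectors instead of the raw scalars, which is where the extra capacity comes from.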
Very elegant solution bro, keep up the good work!
Adding the uniqueness feature is a clever trick, especially useful here. Very well done!
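For readers wondering what a "uniqueness" feature might look like in code, here is a sketch of one common variant: flag whether a value occurs exactly once in its column. This is a general illustration of the idea, not necessarily the exact feature from the video.

```python
import numpy as np

def unique_flags(col):
    # 1.0 where a value occurs exactly once in the column, else 0.0
    values, counts = np.unique(col, return_counts=True)
    count_of = dict(zip(values.tolist(), counts.tolist()))
    return np.array([1.0 if count_of[v] == 1 else 0.0 for v in col])

col = np.array([5, 3, 5, 7, 3, 9])
print(unique_flags(col))  # [0. 0. 0. 1. 0. 1.]
```

Appending a column like this per feature gives the model a direct signal about rare values.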
Thank you!
you just earned a new sub :D more walkthrough videos please!
Hey, I was just wondering: do neural networks generally result in a better score than traditional ML models?
STRAIGHT FIRE MY BRO! LOVING THESE VIDS!
👊 👊
Daily videos 😊🔥🔥
Great video, man!
I'm new to PyTorch. Could you post this NN built in TensorFlow in the comments? That would be awesome.
Fantastic showcase😄.
thanks for sharing
Isn't this kind of feature engineering an unrealistic approach to solving the real-world problem? What I mean is: when you need to classify samples that represent a specific customer's transaction habits in order to predict whether they will make a similar transaction, you won't be able to determine whether each feature value is unique or not. Am I wrong? What am I not getting?
Great video! What resources did you use to get good at PyTorch?
The official tutorials are pretty good, I think! I've learned mostly by starting with very simple projects (like training a CNN on MNIST) and figuring out how to solve things related to them. Like: how do I load my own dataset? What are the best architectures to use? What data augmentation can improve performance (and how do I apply it)? You get the idea :)
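For the "how do I load my own dataset?" step mentioned above, a custom PyTorch dataset only needs `__len__` and `__getitem__`. The sketch below shows that protocol without importing torch so it runs anywhere; in practice you would subclass `torch.utils.data.Dataset` and return tensors. The class and data are illustrative.

```python
class ArrayDataset:
    """Minimal dataset: pairs of features and labels, indexable by position."""

    def __init__(self, features, labels):
        assert len(features) == len(labels)
        self.features = features
        self.labels = labels

    def __len__(self):
        # Number of samples; DataLoader uses this to build batches.
        return len(self.features)

    def __getitem__(self, idx):
        # Return one (feature, label) pair.
        return self.features[idx], self.labels[idx]

ds = ArrayDataset([[0.1, 0.2], [0.3, 0.4]], [0, 1])
print(len(ds), ds[1])  # 2 ([0.3, 0.4], 1)
```

Once a class follows this protocol, `torch.utils.data.DataLoader` can batch and shuffle it directly.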
May I ask you a question related to Dynamic U-Net (U-Net with ResNet34 as a backbone)?
Karpathy constant 3e-4😂
hahah it's true! :D
thanks
What font are you using?
More kaggle videos please
You got it, working on something right now but I'll get back to kaggles
@@AladdinPersson thank you so much !