Very thankful for this video and the entire set of videos. At 7:08, X_test and y_test must be NumPy arrays, right? If yes, should I use X_test.values and y_test.values, or X_test.to_numpy() and y_test.to_numpy()? Thanks again!
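For what it's worth, both approaches return a NumPy array; `.to_numpy()` is the newer recommended method in pandas, while `.values` is the older attribute. A minimal sketch with hypothetical toy data:

```python
import numpy as np
import pandas as pd

# Hypothetical stand-ins for X_test / y_test
X_test = pd.DataFrame({"a": [1.0, 2.0], "b": [3.0, 4.0]})
y_test = pd.Series([0, 1])

# Both return NumPy arrays with identical contents
X_arr = X_test.to_numpy()
y_arr = y_test.to_numpy()

assert isinstance(X_arr, np.ndarray)
assert np.array_equal(X_arr, X_test.values)
assert np.array_equal(y_arr, y_test.values)
```

Either works in practice; `.to_numpy()` is simply the API pandas documents as preferred.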
Thanks for the video! In this case, two of the features are perfectly correlated. What if the absolute correlation is less than 1? Also, what happens in the case of categorical features? Suppose there is a feature column with multiple categories, and we one-hot encode it; does it make sense to sum the one-hot columns' feature importances to get the importance of the original feature?
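Summing the one-hot columns' impurity-based importances is indeed a common heuristic for recovering a per-feature score. A minimal sketch, assuming scikit-learn and a hypothetical toy dataset:

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Hypothetical toy data with one categorical column ("color")
df = pd.DataFrame({
    "color": ["red", "blue", "red", "green", "blue", "green"] * 10,
    "num": list(range(60)),
    "target": [0, 1, 0, 1, 1, 0] * 10,
})

# One-hot encode the categorical column
X = pd.get_dummies(df[["color", "num"]], columns=["color"])
y = df["target"]

model = RandomForestClassifier(random_state=0).fit(X, y)
imp = pd.Series(model.feature_importances_, index=X.columns)

# Sum the importances of the one-hot columns derived from "color"
color_importance = imp[imp.index.str.startswith("color_")].sum()
print(color_importance)
```

Note the usual caveat: impurity-based importances are biased toward high-cardinality features, so permutation importance on the grouped columns can be a more reliable alternative.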
Can you do some video about Shapley values for feature importance? Thanks a lot :)
Keep them coming ❤❤❤
I liked it
Thank you so very much 💙🙏
Great video, really useful explanations!
Glad you liked it
Hi Sebastian Raschka, can you explain LDA with code, please?
Coincidentally, I wrote about it here a few years back: sebastianraschka.com/Articles/2014_python_lda.html
@SebastianRaschka thank you.
Very useful, thank you!
Thanks for the video
Glad you liked it!