Just completed all 37 lectures :) This is the only course that forced me to come back and complete the entire series. It's only because of you, great sir! Thank you so much for sharing these!
This was wonderful!! It's strange that not more people are watching this. Thank you so much for sharing!
Exactly. One of the most approachable and intuitive lecture series on ML there is.
I started learning ML in 2017 when I was an undergrad student, and now I am a graduate student. I took many courses and read many books, but these lectures cleared up many tiny details and concepts which I was missing. Spent my COVID-19 summer watching the whole series. Thanks, Kilian!
This was hands down the best lecture series I have seen in my life. I watched at least one video over the past three weeks, wrote notes along the way, and even tried the homework problems.
Wow, what a ride. Thanks, Professor Weinberger!
Where can we find the homeworks? Thanks
@@tubakias1 Here's the link! www.dropbox.com/s/tbxnjzk5w67u0sp/Homeworks.zip?dl=0
Thank you, professor Kilian! What a great teacher you are! I learned a lot and laughed a lot. Awesome!
It took me two months to complete this course, and my knowledge level has drastically changed! Thank you so much!
Great job!
@@kilianweinberger698 Sir, I can't believe you replied, I am a high school senior and have applied to Cornell! - Really hope to meet you one day!
@@satviktripathi6601 Did you get into Cornell?
All I can say it is the Holy Grail of Machine Learning lectures. Thank you, Professor Kilian.
This class was so amazing and I learnt so many useful concepts. What I loved most was Dr. Weinberger's engaging and intuitive delivery, which made the complex concepts so easy to grasp. He is also funny as hell, which made the classes a lot of fun. A big thank you from my side to Dr. Weinberger for sharing these wonderful lectures as well as the assignments.
It took me 4 months but I've finally completed watching your series of lectures! You made it extremely informative, intuitive and fun and you have a great teaching style :) Thank you!
This is a wonderful machine learning course. I watched several machine learning/deep learning related courses on YouTube. This is my favorite one. In my opinion, a good teacher generally has one of two traits: 1. Make the learning process easier for students by giving illuminating lectures. 2. Want the students to learn from the heart, and motivate students by displaying his/her own passion for the subject. Professor Kilian has both traits. This makes me really enjoy watching this course.
Thank you Kilian!
Thank you for the delightful class, Kilian! With ML making significant strides over the last few months, I was looking for a course that thoroughly and sufficiently explained the foundations behind it. This was it. Dutifully recommended you to all of my friends who are interested in the subject.
Dear Prof. Weinberger,
It's a privilege to be able to listen to the whole series, from the very beginning to the very end. It really helped me getting through some parts that I was not very sure about. Thank you very much!
Thank you, professor Kilian! Thank you for these amazing lectures. Finally finished the whole series, and I feel like this is just the beginning.
your humor made these lectures very enjoyable
Hello Dr. Weinberger. Your videos are hands down the best I've ever seen in terms of setting up intuition and explaining the concepts in the easiest way possible. This has helped me immensely in my studies. Thank you so much!!
Thank you Kilian, your lectures have brought a completely new perception/understanding (which was missing earlier) of how machine learning algorithms work. Your lectures also made me appreciate machine learning even more. Thank you is a small word. May you always be blessed with good health and happiness.
This is such a wonderful course. I have come across so many machine learning courses, blogs, and videos, but this was the best I came across.
I sort of binge-watched it during quarantine, playing back the lectures to note down so many things you explained.
Thanks a lot, Professor Kilian!
Very illuminating lectures. This series should be as popular as Andrew Ng's classic one. Thank you, Professor Kilian.
I lost it at the cinnamon roll part. Thanks for posting these! They have been very helpful for studying
I just love this course, everything is both intuitive and mathematically deep. Loved the course so much that I finished everything in 21 days.
Just finished the series. It was great, I'm kinda sad now! Thanks professor Weinberger. I wish I had you as a prof in college!
Thanks a lot for brilliant lectures, prof. Kilian. It was Awesome fun and extremely insightful !!
Completed all the lectures and absolutely loved them! Professor, you are really inspiring. Thank you so much for sharing these here.
Today, I am going to complete all the lectures!!! This is a legendary course that should have a similar number of views as Dr. Gilbert Strang's linear algebra. Thank you so much, Dr. Kilian!!!
Well done!!
I hope you are safe and sound!! Just wanted to say thank you for the amazing lecture series. I have tears in my eyes... Professor Kilian, you're the best!! I hope you add more videos related to machine learning and deep learning in the future.
Just loved the whole lecture series :) It's so hard to find a series of lectures on YouTube which motivates you to go back and go through the whole thing, but with your lectures I succeeded in watching every one of them and also doing the homeworks :)) Thank you for the resource, and love your sense of humor LOLLL
I honestly can't thank you enough for this series. Thank you so much Kilian.
Just wanted to confirm: this translational invariance is due to the combination of conv layers with a pooling layer, right? Conv layers by themselves are translationally equivariant, so even slight changes in position lead to a corresponding shift in the conv layer's output. With a pooling layer after them, we can achieve translational invariance for a certain region of the image (if the object is moved to an opposite corner, the final representation fed to the FC layers will still be different, right?), since maxing or averaging over the region gives us the same output, at least for small shifts. Hence, we won't require a lot of data (faces in every position) to generalize. Am I right here?
I also used to believe the same, but there is some recent research that says otherwise.
If you have many layers, then the receptive field (i.e., the pixels it is influenced by) of each neuron in the last layer is huge, and translation invariance becomes less of an issue. So yes, you are right, but creating many layers really helps in that respect.
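A tiny numpy sketch (1-D for simplicity, toy signal and kernel of my own choosing) of the distinction this exchange builds on: convolution is shift-equivariant, and pooling on top of it buys (local) shift invariance:

```python
import numpy as np

def conv1d(x, k):
    """'Valid' 1-D cross-correlation: slide kernel k over signal x."""
    n = len(x) - len(k) + 1
    return np.array([np.dot(x[i:i + len(k)], k) for i in range(n)])

# A signal with a single "feature" (a bump), and the same signal shifted by 1.
x = np.array([0., 0., 1., 2., 1., 0., 0., 0.])
x_shift = np.roll(x, 1)

k = np.array([1., 2., 1.])  # arbitrary small kernel

y, y_shift = conv1d(x, k), conv1d(x_shift, k)

# Equivariance: shifting the input shifts the conv output by the same amount.
assert np.allclose(y[:-1], y_shift[1:])

# Invariance (for small shifts): max pooling over the region gives the
# identical value regardless of where exactly the bump sits.
assert np.max(y) == np.max(y_shift)
```

A larger shift that moves the bump into a different pooling region would change the pooled output, which is exactly why stacking many layers (growing the receptive field) helps.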
I came here only to learn about gaussian processes. I ended up watching ~10 hours, as if this was a TV series. Even watched lectures on things I already knew well, but just wanted your perspective. Best course really. Thank you
I just wanna say Thank you very much. You are really the best teacher for this stuff. i can't thank you enough. And please make new courses even if they are not free, i think a lot of people would like to pay for your courses
I've completed your lecture series! Thank you for your generous contribution to my understanding of machine learning!
Amazing and inspiring course. Thank you so much Professor Kilian. Your ML course was the first that I watched complete .All the 37 lectures helped me so much. And when I read new ML material, very often remember the content that I watched in your course (More frequently Gaussian Distribution, Bagging and Boosting :)) thank you so much!
Onto the last one now! but yeah, feels sad as this course comes to end. Quite interesting, informative, and highly engaging :) All thanks to our amazing professor! Please share a few more course lectures at Cornell! We'd love to level up ourselves...
Thank you! I am sure that this course will blow up someday!
Thank you very much. These lectures were great. Could you please publish the lectures for other classes like the one you mentioned about, called, "Machine Learning for Data Science" as well?
Other classes were not taught by him. I am not aware of any lecture recordings, but you might find some of the assignments and slides here: www.cs.cornell.edu/courses/cs4786/2019sp/index.htm
@@ugurkap Hi, do the classes you mentioned above have videos online?
Thank you so much, Professor, for sharing your perspectives and knowledge to the world.
Thank you so much for putting these lectures online. I have enjoyed them all massively. I came across them while reading about decision trees, watched all of them, and over the last 2 weeks have sat in my office every night and made my way through the whole course. Overall I learnt a lot, and I now feel I have a way better understanding of ML to ground the rest of my learning (after I go and spend some time making up for my absence to my wife and kid :D). Would be great if you had a link to some site where I could buy you a drink; I feel like I'm in debt :)
Thanks for sharing this, I believe it is one of the best courses out here.
An outstanding class! Filled with technical rigor and humor.
This was an amazing course, thank you Prof. Kilian!
2:58 Current research on Deep Learning
5:10 We lose information when working on images when we use a regular fully connected network. Images are translationally invariant
9:30 Convolutional layer explanation
13:30 We are restricting the network to only learn functions that are translation invariant
16:50 Research on ConvNets - Nvidia presentation
21:40 Residual networks. Skip networks. Stochastic depth
26:55 Unimportant layers. Robustness because no layer is too important
28:25 Dense connectivity - DenseNet
30:30 Image Manifold - Images lie on a sub-manifold - Add/remove beards to faces
43:25 Dropout is used less these days and BatchNormalization is more common
44:20 Demo - Machine Learning for Data Science - Learn to discover structure in data - Manifolds
Adding lectures on unsupervised learning would have taken this lecture series to another level! ♥️
Hi Professor Kilian,
Once again, thank you very much for the great material.
I have a quick question regarding NNs in general. I apologize in advance if I missed this part in one of the lectures (or comments).
Is feature selection necessary before any NN (or deep learning) algorithm? One would think that since it is built to solve this central problem of representation as well as the weights, it should be handled automatically...
If you have enough data (and you normalize your features) the neural net can learn if some features are irrelevant. However, you can make its life easier (and get away with less training data) if you identify useless features before you do learning. Put it this way: Anything that the network doesn't have to learn itself makes its life easier.
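A minimal numpy illustration (my own toy setup, using a plain least-squares fit as a stand-in for a one-layer net) of the point above: given enough normalized data, the learner itself drives the weight on a useless feature toward zero:

```python
import numpy as np

rng = np.random.default_rng(0)

# 10,000 samples: one informative feature, one pure-noise feature.
n = 10_000
x_useful = rng.normal(size=n)
x_noise = rng.normal(size=n)
y = 3.0 * x_useful + 0.1 * rng.normal(size=n)  # target depends only on x_useful

X = np.column_stack([x_useful, x_noise])
X = (X - X.mean(axis=0)) / X.std(axis=0)  # normalize the features

# Least-squares fit (a linear "network" trained to completion).
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# The model learns to ignore the irrelevant feature on its own.
assert abs(w[0]) > 2.5      # informative feature gets a large weight
assert abs(w[1]) < 0.05     # noise feature's weight is driven toward zero
```

With far fewer samples the noise weight would be noticeably larger, which matches the reply: pruning useless features up front is how you get away with less training data.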
thank you very much for the help!
Cheers,
Gal
Couldn't resist commenting, My first youtube comment since ever. A BIG THANK YOU!
Hi! Thank you for the wonderful course! Are past exams available? I would like to test my knowledge now that I have completed the course.
I am trying to play with this idea, but at 35:29 I don't understand how this image is represented. What is the coordinate system? Is it that the axes represent weights and biases, and for each one you have an entry such as w1*x1, etc.? At 36:46, why is it meaningful to use gradient descent to reconstruct this image? If we have w1*x1, do you take the gradient with respect to x1?
Thanks a lot. Wish I could have attended 'ML for Data Science' as well.
Thank you professor Kilian! The lecture is really great.
What an amazing journey, thank you professor!
Finally completed. Thank you very much prof !
Great job!
Legendary course!
Thank you so much for explaining everything so clearly. So, exactly how many electrons are there in the universe? XD
a lot ...
you are a god my man
Yeah, exactly. A NN learns non-linear relationships naturally, and thus it can learn the manifold easily.
Is it fair to say that the idea of stochastic depth is similar to the randomization of dimensions we do before each greedy search in a random forest? Great lectures btw!
Not entirely. Stochastic depth is more a form of regularization, as it forces the layers in a neural network to be similar.
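A minimal numpy sketch (toy dimensions and weights my own) of the stochastic-depth idea being discussed: during training, each residual block's branch is randomly skipped, leaving just the identity connection:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def resnet_forward(x, weights, survival_prob, train=True):
    """Forward pass through residual blocks with stochastic depth.

    Each block computes x + f(x); during training the residual branch f(x)
    is dropped entirely with probability 1 - survival_prob, so the block
    reduces to the identity. At test time every branch is kept but scaled
    by its survival probability.
    """
    for W in weights:
        branch = relu(x @ W)
        if train:
            if rng.random() < survival_prob:
                x = x + branch          # block survives
            # else: identity, the whole block is skipped
        else:
            x = x + survival_prob * branch
    return x

# Toy network: 4 residual blocks on a 3-dimensional input.
weights = [np.eye(3) * 0.1 for _ in range(4)]
x = np.ones(3)

# survival_prob=1.0 keeps every block; survival_prob=0.0 skips them all,
# so the network collapses to the identity function.
full = resnet_forward(x, weights, survival_prob=1.0, train=True)
none = resnet_forward(x, weights, survival_prob=0.0, train=True)
assert not np.allclose(full, x)
assert np.allclose(none, x)
```

Because the network must still work when any block vanishes, no single layer can become indispensable, which is the regularization effect the reply describes (and differs from random forests, where random feature subsets decorrelate independent trees rather than regularize one model).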
Is the last layer of a deep network still considered a linear classifier even if it has a non-linear activation function? If not, does that assumption still hold?
Yes. Assuming you fix the previous layers, and treat them as feature extractors, then the last (linear) layer is essentially very similar to e.g. logistic regression. Note that logistic regression also has a (non-linear) sigmoid as output s(w'x). The key is that the function s() here acts as a thresholding / scaling function, that essentially makes sure we have output probabilities. Because it is strictly monotonic, it preserves the linearity of the decision boundary. If s() was a sin() function instead of a sigmoid, the classifier would not be linear. Hope this helps.
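A quick numpy check (random data and weights my own) of the monotonicity argument in the reply: thresholding the sigmoid at 0.5 reproduces exactly the linear decision w'x > 0, while a non-monotonic sin() output does not:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
w = np.array([1.5, -2.0])
X = rng.normal(size=(1000, 2))

z = X @ w  # raw linear scores

# sigmoid is strictly monotonic with sigmoid(0) = 0.5, so thresholding its
# output at 0.5 yields the same decisions as the raw linear rule w'x > 0:
linear_decision = z > 0
sigmoid_decision = sigmoid(z) > 0.5
assert np.array_equal(linear_decision, sigmoid_decision)

# A non-monotonic output like sin() flips decisions for large |w'x|,
# so the decision boundary is no longer the hyperplane w'x = 0:
sin_decision = np.sin(z) > 0
assert not np.array_equal(linear_decision, sin_decision)
```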
Killian, Thank you. ❤️
yeah, great experiment!!
DenseNet!
Hahha, good experience :D We can only unfold it when we know its structure beforehand.
the water bucket in PCA is really impressive🤣
Yeah, the surreal fact about researchers and scientists :D
Hahhah, exactly, PCA is good enough to handle many situations.
3:30
Damn, I am sad
I feel a little sad too in the end