Machine Learning Lecture 37 "Neural Networks / Deep Learning" -Cornell CS4780 SP17

  • Published 16 Oct 2024

COMMENTS • 85

  • @kirtanpatel797
    @kirtanpatel797 4 years ago +37

    Just completed all 37 lectures :) This is the only course that forced me to come back and complete the entire series. It's only because of you, Great Sir! Thank you so much for sharing these!

  • @Biesterable
    @Biesterable 5 years ago +26

    This was wonderful!! It's strange that more people aren't watching this. Thank you so much for sharing!

    • @amuro9616
      @amuro9616 5 years ago

      Exactly. One of the most approachable and intuitive lectures on ML there is.

  • @cricketjanoon
    @cricketjanoon 4 years ago +8

    I started learning ML in 2017 when I was an undergrad student and now I am a graduate student. I took many courses and read many books, but these lectures cleared up many tiny details and concepts which I was missing. Spent my COVID-19 summer watching the whole series. Thanks, Kilian!

  • @jy9p4
    @jy9p4 5 years ago +8

    This was hands down the best lecture series I have seen in my life. I watched at least one video over the past three weeks, wrote notes along the way, and even tried the homework problems.
    Wow, what a ride. Thanks, Professor Weinberger!

    • @tubakias1
      @tubakias1 5 years ago +1

      Where can we find the homeworks? Thanks

    • @jy9p4
      @jy9p4 5 years ago +2

      @@tubakias1 Here's the link! www.dropbox.com/s/tbxnjzk5w67u0sp/Homeworks.zip?dl=0

  • @dantemlima
    @dantemlima 5 years ago +18

    Thank you, professor Kilian! What a great teacher you are! I learned a lot and laughed a lot. Awesome!

  • @satviktripathi6601
    @satviktripathi6601 4 years ago +6

    It took me two months to complete this course, and my knowledge level has drastically changed! Thank you so much!

    • @kilianweinberger698
      @kilianweinberger698  4 years ago +3

      Great job!

    • @satviktripathi6601
      @satviktripathi6601 4 years ago +5

      @@kilianweinberger698 Sir, I can't believe you replied, I am a high school senior and have applied to Cornell! - Really hope to meet you one day!

    • @benxneo
      @benxneo 11 months ago

      @@satviktripathi6601 did you get into Cornell?

  • @linxingyao9311
    @linxingyao9311 3 years ago +4

    All I can say is that it is the Holy Grail of Machine Learning lectures. Thank you, Professor Kilian.

  • @jachawkvr
    @jachawkvr 4 years ago +5

    This class was so amazing and I learnt so many useful concepts. What I loved most was Dr. Weinberger's engaging and intuitive delivery, which made the complex concepts so easy to grasp. He is also funny as hell, which made the classes a lot of fun. A big thank you from my side to Dr. Weinberger for sharing these wonderful lectures as well as the assignments.

  • @zelazo81
    @zelazo81 4 years ago +4

    It took me 4 months but I've finally completed watching your series of lectures! You made it extremely informative, intuitive and fun and you have a great teaching style :) Thank you!

  • @xuanwu8045
    @xuanwu8045 5 years ago +18

    This is a wonderful machine learning course. I watched several machine learning/deep learning related courses on YouTube. This is my favorite one. In my opinion, a good teacher generally has one of these 2 traits: 1. Making the learning process easier for students by giving illuminating lectures. 2. Wanting students to learn from the heart and motivating them by displaying his/her own passion for the subject. Professor Kilian has both traits. This makes me really enjoy watching this course.
    Thank you Kilian!

  • @nicksaraev
    @nicksaraev 2 years ago

    Thank you for the delightful class, Kilian! With ML making significant strides over the last few months, I was looking for a course that thoroughly and sufficiently explained the foundations behind it. This was it. Dutifully recommended you to all of my friends who are interested in the subject.

  • @halfmoonliu
    @halfmoonliu 4 years ago +2

    Dear Prof. Weinberger,
    It's a privilege to be able to listen to the whole series, from the very beginning to the very end. It really helped me get through some parts that I was not very sure about. Thank you very much!

  • @sashwotsedhai2836
    @sashwotsedhai2836 1 year ago

    Thank you, professor Kilian! Thank you for these amazing lectures. Finally finished the whole series, and I feel like this is just the beginning.

  • @dnalor8753
    @dnalor8753 2 years ago

    your humor made these lectures very enjoyable

  • @karansawhney2906
    @karansawhney2906 4 years ago +2

    Hello Dr. Weinberger. Your videos are hands down the best I've ever seen in terms of setting up intuition and explaining the concepts in the easiest way possible. This has helped me immensely in my studies. Thank you so much!!

  • @zeroes_ones
    @zeroes_ones 9 months ago

    Thank you Kilian, your lectures have brought in a completely new perception/understanding (which was missing earlier) of how machine learning algorithms work. Your lectures also made me appreciate Machine Learning even more. Thank you is a small word. May you always be blessed with good health and happiness.

  • @sharique7214
    @sharique7214 4 years ago +1

    This is such a wonderful course. I have come across so many machine learning courses, blogs, and videos, but this was the best.
    I sort of binge-watched it during quarantine, playing back the lectures to note down so many things you explained.
    Thanks a lot, Professor Kilian!

  • @yaolinxing1968
    @yaolinxing1968 5 years ago +3

    Very illuminating lectures. This series should be as popular as Andrew Ng's classic one. Thank you, Professor Kilian.

  • @icewave3031
    @icewave3031 10 months ago

    I lost it at the cinnamon roll part. Thanks for posting these! They have been very helpful for studying

  • @gowtham6071
    @gowtham6071 2 years ago +1

    I just love this course, everything is both intuitive and mathematically deep. Loved the course so much that I finished everything in 21 days.

  • @autrepseudo1980
    @autrepseudo1980 4 years ago +6

    Just finished the series. It was great, I'm kinda sad now! Thanks professor Weinberger. I wish I had you as a prof in college!

  • @saitrinathdubba
    @saitrinathdubba 5 years ago +7

    Thanks a lot for the brilliant lectures, Prof. Kilian. It was awesome fun and extremely insightful!!

  • @manogyakaushik8924
    @manogyakaushik8924 3 years ago +1

    Completed all the lectures and absolutely loved them! Professor, you are really inspiring. Thank you so much for sharing these here.

  • @chaowang3093
    @chaowang3093 3 years ago +1

    Today, I am going to complete all the lectures!!! This is a legendary course that should have a similar number of views as Dr. Gilbert Strang's linear algebra. Thank you so much, Dr. Kilian!!!

  • @yogeshdhingra4070
    @yogeshdhingra4070 4 years ago +2

    I hope you are safe and sound!! Just wanted to say thank you for the amazing lecture series. I have tears in my eyes... Professor Kilian... you're the best!! I hope you add more videos related to Machine Learning and Deep Learning in the future.

  • @davejung8732
    @davejung8732 4 years ago +2

    Just loved the whole lecture series :) It's so hard to find a series of lectures on YouTube which motivates you to go back and go through the whole thing, but with your lectures I succeeded in watching every one of them and also doing the homeworks :)) Thank you for the resource and love your sense of humor LOLLL

  • @amarshahchowdry9727
    @amarshahchowdry9727 3 years ago +3

    I honestly can't thank you enough for this series. Thank you so much Kilian.
    Just wanted to confirm that this translational invariance is due to the combination of conv layers as well as a pooling layer, right? Conv layers by themselves are translationally equivariant. With a pooling layer after them, we can achieve translational invariance for a certain section of the image (if the image is moved to an opposite corner, the final representation fed to the FC layers will be different, right??), since even slight changes in position lead to a slight change in the output of the conv layer, but maxing or averaging over the region gives us the same output, at least for small shifts. Hence, we won't require a lot of data (faces in every position) to generalize. Am I right here???

    • @arshtangri5210
      @arshtangri5210 3 years ago +2

      I also used to believe the same, but there is some recent research that says otherwise.

    • @kilianweinberger698
      @kilianweinberger698  3 years ago

      If you have many layers, then the receptive field (i.e. the pixels it is influenced by) of each neuron in the last layer is huge, and translation invariance becomes less of an issue. So yes, you are right, but creating many layers really helps in that respect.
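
As a concrete companion to the exchange above, here is a minimal sketch (PyTorch; the image size, filter, and shift amounts are arbitrary illustrative choices) of the two properties being discussed: a convolution is translation-equivariant, and pooling over a region on top of it gives translation invariance.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
img = torch.zeros(1, 1, 8, 8)
img[0, 0, 2, 2] = 1.0                                   # a single "feature" at (2, 2)
shifted = torch.roll(img, shifts=(3, 3), dims=(2, 3))   # the same feature, moved

kernel = torch.randn(1, 1, 3, 3)                        # one random 3x3 filter

feat = F.conv2d(img, kernel, padding=1)
feat_shifted = F.conv2d(shifted, kernel, padding=1)

# Equivariance: the conv feature map moves along with the input.
print(torch.allclose(torch.roll(feat, shifts=(3, 3), dims=(2, 3)), feat_shifted))
# Invariance: pooling over the whole map gives the same output for both inputs.
print(torch.allclose(feat.amax(dim=(2, 3)), feat_shifted.amax(dim=(2, 3))))
```

As the question notes, real pooling windows are small, so each conv + pooling stage only buys invariance to small shifts; stacking many layers, which grows each neuron's receptive field as described in the reply, is what extends this to larger translations.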

  • @Jeirown
    @Jeirown 4 years ago +1

    I came here only to learn about gaussian processes. I ended up watching ~10 hours, as if this was a TV series. Even watched lectures on things I already knew well, but just wanted your perspective. Best course really. Thank you

  • @jordankuzmanovik5297
    @jordankuzmanovik5297 4 years ago +2

    I just wanna say thank you very much. You are really the best teacher for this stuff. I can't thank you enough. And please make new courses, even if they are not free; I think a lot of people would be willing to pay for your courses.

  • @TrentTube
    @TrentTube 4 years ago +1

    I've completed your lecture series! Thank you for your generous contribution to my understanding of machine learning!

  • @andresguzman5665
    @andresguzman5665 3 years ago

    Amazing and inspiring course. Thank you so much, Professor Kilian. Your ML course was the first that I watched in full. All 37 lectures helped me so much. And when I read new ML material, I very often remember the content from your course (most frequently Gaussian distributions, bagging, and boosting :)). Thank you so much!

  • @Ankansworld
    @Ankansworld 3 years ago +1

    On to the last one now! But yeah, it feels sad as this course comes to an end. Quite interesting, informative, and highly engaging :) All thanks to our amazing professor! Please share a few more course lectures at Cornell! We'd love to level ourselves up...

  • @StarzzLAB
    @StarzzLAB 3 years ago +2

    Thank you! I am sure that this course will blow up someday!

  • @Shkencetari
    @Shkencetari 5 years ago +5

    Thank you very much. These lectures were great. Could you please publish the lectures for other classes as well, like the one you mentioned called "Machine Learning for Data Science"?

    • @ugurkap
      @ugurkap 5 years ago

      The other classes were not taught by him. I am not aware of any lecture recordings, but you might find some of the assignments and slides here: www.cs.cornell.edu/courses/cs4786/2019sp/index.htm

    • @吴吉人
      @吴吉人 2 years ago

      @@ugurkap Hi Kaplan, do the classes you mentioned above have videos online?

  • @rahulchowdhury9739
    @rahulchowdhury9739 2 years ago

    Thank you so much, Professor, for sharing your perspectives and knowledge with the world.

  • @danallford7144
    @danallford7144 2 years ago

    Thank you so much for putting these lectures online. I have enjoyed them all massively. I came across them while reading about decision trees, watched all of them, and over the last 2 weeks have sat in my office every night and made my way through the whole course. In every one I learnt a lot, and I now feel I have a way better understanding of ML to ground the rest of my learning (after I go and spend some time making up for my absence to my wife and kid :D). Would be great if you had a link to some site where I could buy you a drink, I feel like I'm in debt :)

  • @ugurkap
    @ugurkap 5 years ago +1

    Thanks for sharing this, I believe it is one of the best courses out here.

  • @108PRS
    @108PRS 3 years ago

    An outstanding class! Filled with technical rigor and humor.

  • @RHardy25
    @RHardy25 3 years ago +1

    This was an amazing course, thank you Prof. Kilian!

  • @michaelmellinger2324
    @michaelmellinger2324 2 years ago

    2:58 Current research on Deep Learning
    5:10 We lose information when we use a regular fully connected network on images. Images are translationally invariant
    9:30 Convolutional layer explanation
    13:30 We are restricting the network to only learn functions that are translation invariant
    16:50 Research on ConvNets - Nvidia presentation
    21:40 Residual networks. Skip connections. Stochastic depth
    26:55 Impotent layers. Robustness because no layer is too important
    28:25 Dense connectivity - DenseNet (see the sketch below)
    30:30 Image Manifold - Images lie on a sub-manifold - Add/remove beards on faces
    43:25 Dropout is used less these days and Batch Normalization is more common
    44:20 Demo - Machine Learning for Data Science - Learn to discover structure in data - Manifolds
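
For the dense-connectivity entry at 28:25, here is a minimal sketch (PyTorch; the channel counts, growth rate, and depth are arbitrary illustrative choices) of the core DenseNet idea: each layer receives the concatenation of all earlier feature maps rather than only the previous layer's output.

```python
import torch
import torch.nn as nn

class TinyDenseBlock(nn.Module):
    """Each layer's input is the concatenation of all previous feature maps."""
    def __init__(self, in_channels, growth_rate, num_layers):
        super().__init__()
        self.layers = nn.ModuleList()
        channels = in_channels
        for _ in range(num_layers):
            self.layers.append(nn.Sequential(
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels, growth_rate, kernel_size=3, padding=1),
            ))
            channels += growth_rate              # later layers also see this output

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            out = layer(torch.cat(features, dim=1))  # concatenate everything so far
            features.append(out)
        return torch.cat(features, dim=1)

block = TinyDenseBlock(in_channels=16, growth_rate=12, num_layers=4)
print(block(torch.randn(1, 16, 32, 32)).shape)   # torch.Size([1, 64, 32, 32])
```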

  • @HhhHhh-et5yk
    @HhhHhh-et5yk 4 years ago

    Adding lectures on unsupervised learning would have taken this lecture series to another level! ☄♥️

  • @galhulli5081
    @galhulli5081 4 years ago +1

    Hi Professor Kilian,
    Once again, thank you very much for the great material.
    I have a quick question regarding NNs in general. I apologize in advance if I missed this part in one of the lectures (or comments).
    Is feature selection necessary before any NN (or deep learning) algorithm? One would think that since it is built to solve this central problem of representation as well as the weights, it should be handled automatically...

    • @kilianweinberger698
      @kilianweinberger698  4 years ago +1

      If you have enough data (and you normalize your features) the neural net can learn if some features are irrelevant. However, you can make its life easier (and get away with less training data) if you identify useless features before you do learning. Put it this way: Anything that the network doesn't have to learn itself makes its life easier.

    • @galhulli5081
      @galhulli5081 4 years ago

      thank you very much for the help!
      Cheers,
      Gal
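
A minimal sketch (NumPy; the data, shapes, and variance threshold are made up purely for illustration) of the preprocessing suggested in the reply above: standardize each feature, and optionally drop features you already know are useless before training.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))                 # pretend training features
X[:, 7] = 1e-3 * rng.normal(size=500)          # an (almost) useless, near-constant feature

# Standardize to zero mean, unit variance per feature (one common way to normalize).
X_std = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-8)

# Optional manual feature selection before training: drop near-constant columns.
keep = X.var(axis=0) > 1e-4                    # threshold chosen only for this toy example
X_train = X_std[:, keep]
print(X_train.shape)                           # (500, 19): one column removed
```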

  • @jeet3797
    @jeet3797 4 years ago +1

    Couldn't resist commenting. My first YouTube comment ever. A BIG THANK YOU!

  • @alexstar8512
    @alexstar8512 3 years ago

    Hi! Thank you for the wonderful course! Are past exams available? I would like to test my knowledge now that I have completed the course.

  • @71sephiroth
    @71sephiroth 4 years ago

    I am trying to play with this idea, but at 35:29 I don't understand how this image is represented; what is the coordinate system? Is it that the axes represent weights and biases, and for each one you have an entry such as w1*x1, etc.? At 36:46, why is it meaningful to use gradient descent to reconstruct this image? If we have w1*x1, do you take gradient descent with respect to x1?

  • @vocabularybytesbypriyankgo1558
    @vocabularybytesbypriyankgo1558 10 days ago

    Thanks a lot. Wish I could have attended 'ML for Data Science' as well.

  • @thinkingaloud1833
    @thinkingaloud1833 3 years ago

    Thank you professor Kilian! The lecture is really great.

  • @divykala169
    @divykala169 3 years ago

    What an amazing journey, thank you professor!

  • @madhurgarg4114
    @madhurgarg4114 2 years ago

    Finally completed. Thank you very much, Prof!

  • @daniilzaytsev2040
    @daniilzaytsev2040 3 months ago

    Legendary course!

  • @susansun4130
    @susansun4130 3 years ago +1

    Thank you so much for explaining everything so clearly. So exactly how many electrons are there in the universe XD

  • @louis6720
    @louis6720 4 years ago +2

    you are a god my man

  • @gregmakov2680
    @gregmakov2680 2 years ago

    Yeah, exactly. NNs learn non-linear relationships naturally and thus can learn manifolds easily.

  • @clubmaster2012
    @clubmaster2012 4 years ago

    Is it fair to say that the idea of stochastic depth is similar to the randomization of dimensions we do before each greedy search in a random forest? Great lectures btw!

    • @kilianweinberger698
      @kilianweinberger698  4 years ago

      Not entirely. Stochastic depth is more a form of regularization as it forces the layers in a neural network to be similar.
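
A minimal sketch (PyTorch; the layer sizes and survival probability are arbitrary illustrative choices) of the stochastic-depth idea discussed here: during training, each residual block's transformation is skipped at random, so the block temporarily reduces to the identity mapping.

```python
import torch
import torch.nn as nn

class StochasticDepthBlock(nn.Module):
    """A residual block whose transformation is randomly dropped during training."""
    def __init__(self, channels, survival_prob=0.8):
        super().__init__()
        self.survival_prob = survival_prob
        self.residual = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        if self.training and torch.rand(1).item() > self.survival_prob:
            return x                               # skip: the block acts as the identity
        out = self.residual(x)
        if not self.training:
            out = out * self.survival_prob         # expected-value rescaling at test time
        return torch.relu(x + out)

block = StochasticDepthBlock(channels=8)
print(block(torch.randn(2, 8, 16, 16)).shape)      # same shape as the input
```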

  • @dude8309
    @dude8309 4 years ago

    Is the last layer of a deep network still considered a linear classifier even if it has a non-linear activation function? If not, does that assumption still hold?

    • @kilianweinberger698
      @kilianweinberger698  4 years ago

      Yes. Assuming you fix the previous layers, and treat them as feature extractors, then the last (linear) layer is essentially very similar to e.g. logistic regression. Note that logistic regression also has a (non-linear) sigmoid as output s(w'x). The key is that the function s() here acts as a thresholding / scaling function, that essentially makes sure we have output probabilities. Because it is strictly monotonic, it preserves the linearity of the decision boundary. If s() was a sin() function instead of a sigmoid, the classifier would not be linear. Hope this helps.
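
A minimal sketch (NumPy; the weights and data are random and purely illustrative) of the point made in the reply above: because the sigmoid is strictly monotonic with sigmoid(0) = 0.5, thresholding before or after it selects exactly the same points, so the decision boundary of the last layer stays linear.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
w, b = rng.normal(size=3), 0.5            # last-layer weights and bias
X = rng.normal(size=(1000, 3))            # "features" produced by the earlier layers

scores = X @ w - b
# Same predictions whether we threshold the raw score at 0 or the sigmoid at 0.5,
# so the non-linear output function does not bend the decision boundary.
print(np.array_equal(scores > 0, sigmoid(scores) > 0.5))   # True
```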

  • @sunilkumarmeena2450
    @sunilkumarmeena2450 2 years ago

    Kilian, thank you. ❤️

  • @gregmakov2680
    @gregmakov2680 2 years ago

    yeah, great experiment!!

  • @kc1299
    @kc1299 3 years ago +2

    DenseNet!

  • @gregmakov2680
    @gregmakov2680 2 years ago

    Hahha, gooood experience :D:D:D We can only unfold it when we know its structure beforehand.

  • @吴吉人
    @吴吉人 2 years ago

    the water bucket in PCA is really impressive🤣

  • @gregmakov2680
    @gregmakov2680 2 years ago

    Yeah, the surreal fact about researchers, scientists :D:D

  • @gregmakov2680
    @gregmakov2680 2 years ago

    Hahhah, exactly, PCA is good enough to handle many situations.

  • @shrishtrivedi2652
    @shrishtrivedi2652 3 years ago

    3:30

  • @itachi4alltime
    @itachi4alltime 2 years ago

    Damn, I am sad

    • @吴吉人
      @吴吉人 2 years ago

      I feel a little sad too at the end