Andrew Ng: Deep Learning, Self-Taught Learning and Unsupervised Feature Learning

  • Published 6 Jan 2025

COMMENTS • 111

  • @veloxsouth
    @veloxsouth 11 years ago +15

    I really liked this slideshow. My brain could almost learn to see his movements like this was a video.

  • @siddharthnibjiya
    @siddharthnibjiya 9 years ago +49

    Good content, but the video lags constantly and becomes uncomfortable for the eyes. Please prevent such lags.

  • @jaggar28
    @jaggar28 9 years ago +1

    Many thanks for sharing the lecture, it's so clear and concise. Now I have an overall better understanding of the differences between various machine learning algorithms.

  • @Lycheeee11
    @Lycheeee11 9 years ago

    I like Andrew Ng. He is always so humble!

  • @therealadrian
    @therealadrian 11 years ago

    In case anyone is curious, the PowerPoint presentation Andrew is giving can be found just by Google searching "DeepLearning-Mar2013.pptx". If you want to see some of the slides that Andrew talks through but which don't appear in the video, this is helpful.

  • @ProfessionalTycoons
    @ProfessionalTycoons 6 years ago

    Man Dr. Ng is such a humble man.

  • @alexander_adnan
    @alexander_adnan 7 years ago

    I've been in the field of machine learning and computer vision since 2010... man, this is a great state-of-the-art presentation.
    Nice... really nice

  • @chunyangzhang2948
    @chunyangzhang2948 10 years ago +1

    This is all about applications of deep learning, and it gives an intuitive explanation of why deep learning works well. G. Hinton and Y. Bengio released several presentations online that cover the detailed and theoretical side of deep learning. I started my study from there.

  • @CherieOliviaAnondananda
    @CherieOliviaAnondananda 9 years ago

    I remember working on Natural Language Processing as an undergrad in the 80's ... it's come so far ...
    of course, back then we didn't compete with Neuro-Linguistic Programming for the NLP acronym.

  • @msgeinstein
    @msgeinstein 8 years ago +1

    This was the video I was looking for after his machine learning classes. I hope we'll meet at some conference.

  • @douglaskell1275
    @douglaskell1275 9 years ago

    Very good and clear development of the mainstream ideas in modern deep belief networks

  • @CreationTribe
    @CreationTribe 10 years ago +2

    Wow - great talk! Extremely interesting material. I've been fascinated with AI ever since I was a kid. In fact, I think that's what got me into the field of software dev in the first place. Mayhap it's time to truly start playing around with neural nets and learning algorithms.

    • @aristaukulis4275
      @aristaukulis4275 10 years ago

      what kind of software are you developing?

    • @CreationTribe
      @CreationTribe 10 years ago

      Aris Taukulis Currently I'm writing proprietary software for social communities based on different medical issues. RoR as it happens.

  • @N3ur0m4nc3r
    @N3ur0m4nc3r 9 years ago +26

    If you are having lag issues ...
    helper.ipam.ucla.edu/wowzavideo.aspx?vfn=10595.mp4&vfd=gss2012 This video from the site posted by David Sanders (see below) was working better for me.

    • @VertigoAt1977
      @VertigoAt1977 8 years ago +2

      +Fifty7V thank you :)

    • @N3ur0m4nc3r
      @N3ur0m4nc3r 8 years ago +1

      VertigoAt1977 You're welcome. If you like a video like this, you are my kind of people. Happy to help.

  • @Chorizzosoup
    @Chorizzosoup 11 years ago +3

    I feel the same way currently: machine learning tends to be just curve fitting and statistics. That is not what I really want when I say I want to learn AI/machine learning.

    • @Pianofy
      @Pianofy 10 years ago +1

      So it's up to you to invest your spare time into finding new methods, right?
      I think having a stereoscopic view or a time element, or both, will greatly help improve image recognition algorithms. I think humans would also have a lot of trouble recognizing motorcycles if they spent their entire lives in a world of non-moving 2D images. The reason we can pick objects apart from others is that we have seen them often, at other times, moving in 3D space. Humans use a combination of parallax and both eyes to map their 2D view into a 3D understanding of the world. If we want computers to make sense of the world in the same way we do, the first preprocessing step would be to do the same: try to guess a 3rd dimension onto 2D images (a rough sketch of that idea follows below).
      Just my thoughts. Might be utter bullshit :)
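
      A minimal sketch of the parallax idea above, assuming OpenCV (cv2) is installed and a rectified stereo image pair exists on disk (the file names here are hypothetical, not from the talk). Block matching estimates, for each patch in the left image, how far it has shifted in the right image; that disparity is a rough stand-in for the "3rd dimension".

```python
import cv2

# Load a rectified stereo pair as grayscale (hypothetical file names).
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Block matching: for each patch in the left image, find the horizontal
# shift (disparity) of the best-matching patch in the right image.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right)

# Larger disparity means a closer object; with a calibrated rig, depth is
# proportional to (focal_length * baseline) / disparity.
vis = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX).astype("uint8")
cv2.imwrite("disparity.png", vis)
```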

  • @sikor02
    @sikor02 9 years ago

    Thank you very much for sharing this! It gave me a new direction to extend my knowledge.

  • @nhnifong
    @nhnifong 11 years ago

    Well, the Gabor-like filters that deep nets tend to discover are closely related to the frequency basis behind JPEG, which is an important part of MPEG compression. But that's only one layer of "features", and there are no learned temporal features that take advantage of the redundancy from frame to frame. The keyframing technique does not count as a learned feature.
    Geoff Hinton has applied stacked restricted Boltzmann machines to video of bouncing balls and found that they can learn temporal features, so yes! (See the sketch below for what a Gabor-like filter looks like.)
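
    A minimal illustration of what a "Gabor-like filter" is, hand-rolled in NumPy rather than taken from any of the talks mentioned above (the gabor_kernel helper and its parameters are just for illustration): a sinusoid windowed by a Gaussian, the kind of oriented edge detector that both V1 neurons and the first layer of many deep networks tend to converge on.

```python
import numpy as np

def gabor_kernel(size=15, sigma=3.0, theta=0.0, wavelength=6.0):
    """Return a size x size Gabor filter oriented at angle theta (radians)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate coordinates so the sinusoid runs along direction theta.
    x_r = x * np.cos(theta) + y * np.sin(theta)
    y_r = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_r ** 2 + y_r ** 2) / (2 * sigma ** 2))  # Gaussian window
    carrier = np.cos(2 * np.pi * x_r / wavelength)                # oriented sinusoid
    return envelope * carrier

# A small bank of 4 orientations: one first "layer" of edge features.
bank = [gabor_kernel(theta=t) for t in np.linspace(0, np.pi, 4, endpoint=False)]
print(bank[0].shape)  # (15, 15)
```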

  • @TheodorosKatsikis
    @TheodorosKatsikis 11 years ago +2

    I would love to study side by side with this guy!!!!!! He rules!!!!

  • @allensirolly2495
    @allensirolly2495 10 years ago +9

    Part 2 here: www.ipam.ucla.edu/schedule.aspx?pc=gss2012

    • @pleiadian
      @pleiadian 10 years ago

      The link doesn't seem to work anymore ...

    • @allensirolly2495
      @allensirolly2495 10 years ago +3

      pleiadian www.ipam.ucla.edu/programs/summer-schools/graduate-summer-school-deep-learning-feature-learning/?tab=schedule

    • @pleiadian
      @pleiadian 10 years ago

      cool. thanks a lot!

  • @bujin5455
    @bujin5455 8 years ago +14

    Great video, wish it wasn't so choppy.

    • @spiritusinfinitus
      @spiritusinfinitus 8 years ago +2

      It doesn't actually inspire too much confidence that some of the greatest brains on the planet, developing AI that will presumably be responsible for our safety and well-being, can't figure out how to upload a decent quality YouTube video ten years after its inception. ;D (jk btw)

    • @bujin5455
      @bujin5455 8 years ago

      The thought had crossed my mind, but I decided to keep that to myself.

  • @urjeans2896
    @urjeans2896 10 years ago

    Cool talk! Thanks for the video!

  • @johnvonhorn2942
    @johnvonhorn2942 8 years ago

    We're a start up called "#Winning" and we're using convolutional neural networks to predict lottery numbers. We're currently training a recurrent network on coin tosses and dice rolls before moving onto the holy grail (but never holy fail) of predicting next week's lottery numbers

  • @AileenFang0000
    @AileenFang0000 10 years ago

    Great video! Thank you!
    I can listen to Andrew Ng all day...where is part 2?

  • @imaekgames
    @imaekgames 9 years ago

    Should all AI eventually share a standard for a "base" brain? Should an individual AI be on a knowledge island of its own, or should they all be interconnected and able to share their acquired learning and knowledge?

  • @ishirraj8554
    @ishirraj8554 9 years ago

    At 24:00 the edge numbers 36 and 42 are misrepresented :)

  • @wlorenz65
    @wlorenz65 8 years ago +1

    The statement at 19:30 "Humans have 20 years of experience looking at unlabeled images" is nonsense. Human experiences are always labelled by their feelings (aka rewards in machine learning).

  • @azmodaipaul8023
    @azmodaipaul8023 10 years ago

    Great video!

  • @sergeysmyshlyaev9716
    @sergeysmyshlyaev9716 11 years ago

    In-lecture quizzes: tested on Coursera, now applied to in-class lectures.

  • @TobiasDeml
    @TobiasDeml 6 years ago

    Need an unsupervised learning algorithm to fill in the gaps of the video lag. Other than that, really good!

  • @fungiside
    @fungiside 11 years ago +11

    Great video, but intermittently very choppy for some reason.

  • @user-UH5thojFkPng
    @user-UH5thojFkPng 7 years ago

    He uses the same table of numbers to represent an image in several different talks.

  • @chrisanderson1513
    @chrisanderson1513 8 years ago

    Found this in a probabilistic programming playlist. When does that portion of the talk start?

  • @assaad33
    @assaad33 8 years ago

    Thanks for sharing this video, awesome!!

  • @JasonLivesay
    @JasonLivesay 11 years ago +1

    Can you apply these types of techniques to video compression?

    • @fabianlobrij7134
      @fabianlobrij7134 4 years ago

      Through Digital Signal Processing (DSP) type operations, I believe

  • @akash1playboy
    @akash1playboy 11 years ago

    Help!!
    I am currently working on spatio-temporal feature extraction from videos using deep learning. Unfortunately there hasn't been much work on it. Can anyone provide me with links on deep neural networks for video features???

  • @clearmenser
    @clearmenser 11 years ago

    Is there a reason to stop at only 4 levels of hierarchy for feature learning? Why not 16 or 32 or 256?

  • @isospin8u
    @isospin8u 9 years ago

    I wish Andrew would move beyond text, audio, and images/video, as there are more interesting problems, and I'd like to hear about possible algorithmic solutions to them.

  • @IuliusCurt
    @IuliusCurt 10 years ago +4

    16:20 - Dacia 1300

  • @seaslee2010
    @seaslee2010 10 years ago

    clear and rich

  • @jayp6955
    @jayp6955 9 years ago

    6:43 textons, spin image, SIFT paper

  • @ProfessionalTycoons
    @ProfessionalTycoons 6 years ago

    just appreciate the video guys.

  • @neotechfriend
    @neotechfriend 9 years ago

    thks for uploading this video

  • @amgadmuhammad2958
    @amgadmuhammad2958 11 years ago

    do you have a link for part 2?

  • @stiLLa2000
    @stiLLa2000 9 years ago

    thx for the upload!

  • @K4moo
    @K4moo 10 years ago

    Parsing the data into feature vectors is a very interesting idea =)

  • @SweetHyunho
    @SweetHyunho 10 years ago

    Licking to feel voltages on a panel sounds dangerous and painful. Perhaps a soft touch panel would be a better monitor for the blind...

    • @peterdickinson1936
      @peterdickinson1936 10 years ago

      The early precursor to the tongue "display" (which apparently feels like varying degrees of soda bubbles rather than a 9V battery) was actually a giant chair that replaced pixels with little actuators. Turns out skin isn't so good with definition, whereas the tongue is super dense with sensors - more "pixels per inch" if you will.

  • @ankic7745
    @ankic7745 9 years ago

    thanks a ton for sharing :)

  • @ProgrammingTime
    @ProgrammingTime 11 years ago

    This guy rules

  • @Shiro77_shorts
    @Shiro77_shorts 7 years ago

    I need to learn how a programming language is converted into machine language....
    Please upload your lecture videos.....

  • @vdoundakov
    @vdoundakov 10 years ago

    Did I hear correctly that the single learning algorithm is Andrew Ng's hypothesis? Like, his invention?

  • @joseinTokyo
    @joseinTokyo 10 years ago +1

    great!

  • @hammashamzah
    @hammashamzah 9 years ago

    Oh my god this is great

  • @m3po22
    @m3po22 6 years ago

    The choppiness is in the video.
    Biology, layers: 20:15 - 33:41

  • @ganeshjonna
    @ganeshjonna 10 years ago +1

    Nice

  • @gabrielchichi5692
    @gabrielchichi5692 5 years ago +1

    Better quality here: ua-cam.com/video/pfFyZY1RPZU/v-deo.html

  • @fangweixu6243
    @fangweixu6243 7 years ago

    It's great

  • @troykillah
    @troykillah 9 years ago

    11:00 Still, I wonder why our brains are so similar and these brain regions are normally structured according to one specific task: the visual cortex for seeing things, and the auditory cortex for understanding sound frequency modulations. Why do certain sensory tasks occur in the same brain regions? That is my question; thanks for the replies if there are any ;D Really interesting stuff, AI!!!!!! By 2040 the internet will be alive! ;p

    • @wlorenz65
      @wlorenz65 8 years ago +1

      +Vrolijke Vent Long wires are expensive. Total connectivity in the human brain is about 15%. It's nonsense to connect A1 with V1. The genes know that and therefore brain region connections are predefined in order to speed up learning.

  • @sriprasanth1101
    @sriprasanth1101 4 years ago

    Can you guide me to learn deep learning pls

  • @StriderAngel496
    @StriderAngel496 9 years ago +1

    How can you have so much knowledge and talk about deep learning and advanced AI but not be able to record a 45-minute video? I really don't understand that...

  • @mmm2008mmm
    @mmm2008mmm 10 years ago

    27:40 to 28:13

  • @superjaykramer
    @superjaykramer 11 years ago

    I want Andrew Ng to come up with an algorithm that just says: this video has been recorded in a crap way..

  • @ahmed101989
    @ahmed101989 11 years ago

    Simple, yet informative :)

  • @justonium
    @justonium 11 years ago

    Certainly, considering the brain does it.

  • @badhombre4942
    @badhombre4942 3 years ago

    Impressive...the sparse coding of him, in this video.

  • @mingzhang6293
    @mingzhang6293 11 years ago

    yeah, found you.

  • @happyjohn1656
    @happyjohn1656 5 years ago +1

    2012... hmm...

  • @WilliamKKidd
    @WilliamKKidd 9 years ago +7

    I think the real question here, is HOW the FUCK is his last name pronounced???

    • @ChristianToh
      @ChristianToh 9 years ago +5

      Voxel Skull ng

    • @WilliamKKidd
      @WilliamKKidd 9 years ago

      *****
      That does not help....

    • @WilliamKKidd
      @WilliamKKidd 9 years ago +1

      David Sanders
      I just found out, it's more of an "oong"... I have a friend with that last name, and he explained it..

    • @JohnSee80
      @JohnSee80 9 years ago +1

      Voxel Skull
      It's a Chinese surname used by certain dialects, en.wikipedia.org/wiki/Ng_%28surname%29 - similar (in Chinese characters) to those with 'Huang' as a surname. You can pronounce it like "earn" but with a 'g' at the end... so, well, "earng".

    • @Lycheeee11
      @Lycheeee11 9 years ago

      +Voxel Skull www.quora.com/How-does-Andrew-Ng-prefer-his-name-to-be-pronounced

  • @mswai5020
    @mswai5020 9 years ago +2

    Andrew Ng? Isn't this Roger Peng?

  • @m.r.wiggins1537
    @m.r.wiggins1537 9 years ago +1

    Anyone interested in this video would enjoy this: www.huffingtonpost.com.au/2015/05/13/andrew-ng_n_7267682.html
    Andrew Ng interview.

  • @gcgrabodan
    @gcgrabodan 8 years ago +2

    Did he seriously have to defend himself for being inspired by nature, deliberately avoiding the term "evolution"??? At around 8:45.

  • @weewilly2007
    @weewilly2007 9 years ago

    What do we want computers to do? Play fetch like man's other best friend? Seems frivolous when you put it that way. Except if you're one of those who believes that accurate recollection can somehow make us whole again.

  • @jonasslrdahlskjrpe3317
    @jonasslrdahlskjrpe3317 9 years ago

    LEGO CITY

  • @Jensbolte
    @Jensbolte 11 years ago

    no

  • @_.-._.-._.-_.-._.-._.-_.-._.-
    @_.-._.-._.-_.-._.-._.-_.-._.- 9 years ago

    Humankind plays around with its brain to finally know about its God, the Creator. . . . .

  • @elhossinyhaitham
    @elhossinyhaitham 10 years ago

    Lotfi Zadeh dislikes the video..

  • @ProfessionalTycoons
    @ProfessionalTycoons 6 years ago

    Amazing video!
