I really liked this slideshow. My brain could almost fill in his movements, as if it were a video.
Good content, but the video lags constantly, which is hard on the eyes. Please prevent such lags.
Many thanks for sharing the lecture, it's so clear and concise. Now, I've an overall better understanding of the difference between various machine learning algorithms.
I like Andrew Ng. He is always so humble!
In case anyone is curious, the powerpoint presentation Andrew is giving can be found just by Google searching "DeepLearning-Mar2013.pptx". If you want to see some of the slides that Andrew talks to but which do not get into the video, this is helpful.
Man Dr. Ng is such a humble man.
I've been in the field of machine learning and computer vision since 2010.. man, this is a great state-of-the-art presentation..
Nice ... really nice
This is all about applications of deep learning, and it gives an intuitive explanation of why deep learning works well. G. Hinton and Y. Bengio have released several presentations online that cover the detailed, theoretical side of deep learning. I started my study from there.
I remember working on Natural Language Processing as an undergrad in the 80's ... it's come so far ...
of course, back then we didn't compete with Neuro-Linguistic Programming for the NLP acronym.
This was the video I was looking for after taking his machine learning classes. I hope we'll meet at some conference.
Very good and clear development of the mainstream ideas in modern deep belief networks
Wow - great talk! Extremely interesting material. I've been fascinated with AI ever since I was a kid. In fact, I think that's what got me into the field of software dev in the first place. Mayhap it's time to truly start playing around with neural nets and learning algorithms.
what kind of software are you developing?
Aris Taukulis Currently I'm writing proprietary software for social communities based on different medical issues. RoR as it happens.
If you are having lag issues ...
helper.ipam.ucla.edu/wowzavideo.aspx?vfn=10595.mp4&vfd=gss2012 This video from the site posted by David Sanders (see below) was working better for me.
+Fifty7V thank you :)
VertigoAt1977 You're welcome. If you like a video like this, you are my kind of people. Happy to help.
I feel the same way currently: machine learning tends to be just curve fitting and statistics. This is not what I really want to learn when I say I want to learn AI/machine learning.
So up to you to invest your spare time into finding new methods, right?
I think having a stereoscopic view or a time element, or both, would greatly help image recognition algorithms. I think humans would also have a lot of trouble recognizing motorcycles if they spent their entire lives in a world of non-moving 2D images. The reason we can pick objects apart from one another is that we have seen them often, at other times, moving in 3D space. Humans use a combination of parallax and binocular vision to map their 2D view into a 3D understanding of the world. If we want computers to make sense of the world the same way we do, the first preprocessing step would be to do the same: try to infer a third dimension from 2D images.
Just my thoughts. Might be utter bullshit :)
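To make the parallax idea above concrete, here's a minimal depth-from-stereo sketch in NumPy: brute-force block matching that recovers the horizontal disparity (roughly, inverse depth) between a left/right image pair. The function name and parameter values are my own illustration, not anything from the talk.

```python
import numpy as np

def block_match_disparity(left, right, block=5, max_disp=8):
    """For each pixel in `left`, find the horizontal shift d (0..max_disp)
    whose block in `right` best matches, by sum of absolute differences."""
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1]
            best_cost, best_d = np.inf, 0
            for d in range(max_disp + 1):
                cand = right[y - half:y + half + 1,
                             x - d - half:x - d + half + 1]
                cost = np.abs(patch - cand).sum()
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp

# Synthetic stereo pair: the "right" view is the left image shifted by 3 px.
rng = np.random.default_rng(0)
left = rng.random((20, 40))
right = np.roll(left, -3, axis=1)
print(block_match_disparity(left, right, block=5, max_disp=6)[10, 15])  # 3
```

Real stereo systems (e.g. OpenCV's StereoBM) add smoothness constraints and subpixel refinement, but the core preprocessing step is just this search.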
Thank you very much for sharing this! That gave me a new direction to extend my knowledge
Well, the Gabor-like filters that deep nets tend to discover are close relatives of the DCT basis functions underlying JPEG, which is an important part of MPEG compression. But that's only one layer of "features", and there are no learned temporal features that take advantage of the redundancy from frame to frame. The keyframing technique does not count as a learned feature.
Geoff Hinton has applied stacked restricted Boltzmann machines to video of bouncing balls and found that they can learn temporal features, so yes!
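For anyone wondering what those "Gabor-like filters" actually are, here's a tiny NumPy construction of one: a cosine grating windowed by a Gaussian, selective for one orientation and one spatial frequency. The parameter values are arbitrary illustrative choices.

```python
import numpy as np

def gabor_kernel(size=21, wavelength=6.0, theta=0.0, sigma=4.0):
    """A 2D Gabor filter: cosine grating at angle `theta` with the given
    wavelength (in pixels), windowed by an isotropic Gaussian envelope."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)  # coordinate across the grating
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    carrier = np.cos(2.0 * np.pi * xr / wavelength)
    return envelope * carrier

# A small bank at several orientations, like a V1-style first feature layer.
bank = [gabor_kernel(theta=t) for t in np.linspace(0, np.pi, 4, endpoint=False)]
```

Convolving an image with a bank like this gives oriented edge/texture responses much like the first-layer features shown in the talk.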
I would love to study side by side with this guy!!!!!! He rules!!!!
Part 2 here: www.ipam.ucla.edu/schedule.aspx?pc=gss2012
The link doesn't seem to work anymore ...
pleiadian www.ipam.ucla.edu/programs/summer-schools/graduate-summer-school-deep-learning-feature-learning/?tab=schedule
cool. thanks a lot!
Great video, wish it wasn't so choppy.
It doesn't exactly inspire confidence that some of the greatest brains on the planet, developing AI that will presumably be responsible for our safety and well-being, can't figure out how to upload a decent-quality YouTube video ten years after the site's inception. ;D (jk btw)
The thought had crossed my mind, but I decided to keep that to myself.
Cool talk! Thanks for the video!
We're a startup called "#Winning", and we're using convolutional neural networks to predict lottery numbers. We're currently training a recurrent network on coin tosses and dice rolls before moving on to the holy grail (but never holy fail) of predicting next week's lottery numbers.
Great video! Thank you!
I can listen to Andrew Ng all day...where is part 2?
Should all AI eventually have a standard for a "base" brain? Should an individual AI be on a knowledge island of its own, or should they all be interconnected and able to share their acquired learning and knowledge?
At 24:00 the edge numbers 36 and 42 are misrepresented :)
The statement at 19:30 "Humans have 20 years of experience looking at unlabeled images" is nonsense. Human experiences are always labelled by their feelings (aka rewards in machine learning).
Great video !
In-lecture quizzes: tested on Coursera, now applied to in-class lectures.
Need an unsupervised learning algorithm to fill in the gaps of the video lag. Other than that, really good!
Great video, but intermittently very choppy for some reason.
He uses the same table of numbers to represent an image in several different talks.
Found this in a probabilistic programming playlist. When does that portion of the talk start?
thanks for sharing this video, awesome !!
Can you apply these types of techniques to video compression?
Through Digital Signal Processing (DSP)-style operations, I believe
Help!!
I am currently working on spatio-temporal feature extraction from videos using deep learning. Unfortunately, there hasn't been much work on it. Can anyone provide me with links on deep neural networks for video features???
Is there a reason to stop at only 4 levels of hierarchy for feature learning? Why not 16 or 32 or 256?
I wish Andrew would move beyond text, audio, and images/video, since there are more interesting problems, and I'd like to hear about possible algorithmic solutions to them.
16:20 - Dacia 1300
clear and rich
6:43 textons, spin image, SIFT paper
just appreciate the video guys.
thks for uploading this video
do you have a link for part 2?
thx for the upload!
Parsing the data into feature vectors: very interesting idea =)
Licking to feel voltages on a panel sounds dangerous and painful. Perhaps a soft touch panel would be a better monitor for the blind...
The early precursor to the tongue "display" (which apparently feels like varying degrees of soda bubbles rather than a 9V battery) was actually a giant chair that replaced pixels with little actuators. Turns out skin isn't so good at resolution, whereas the tongue is super dense with sensors: more "pixels per inch", if you will.
thanks a ton for sharing :)
This guy rules
I need to learn how programming languages are converted into machine language....
Please upload your lecture videos.....
Did I hear correctly that the "one learning algorithm" idea is Andrew Ng's hypothesis? Like, his invention?
no.
great!
Oh my god this is great
The choppiness is in the video.
Biology, layers: 20:15 - 33:41
Nice
Better Quality here : ua-cam.com/video/pfFyZY1RPZU/v-deo.html
It's great
11:00 Still I wonder why our brains are so similar, and why brain regions are normally structured according to one specific task: the visual cortex for seeing things, the auditory cortex for understanding sound-frequency modulations. Why do certain sensory tasks occur in the same brain regions? That is my question; thanks for the replies, if there are any ;D Really interesting stuff, AI!!!!!! 2040: the internet will be alive! ;p
+Vrolijke Vent Long wires are expensive. Total connectivity in the human brain is only about 15%. It would be nonsense to connect A1 with V1. The genes "know" that, so brain-region connections are predefined in order to speed up learning.
Can you guide me to learn deep learning pls
How can you have so much knowledge and talk about deep learning and advanced AI, but not be able to record a 45-minute video? I really don't understand that...
27:40 to 28:13
I want Andrew Ng to come up with an algorithm that just says: this video has been recorded in a crap way..
SimpLe, yet informative :)
Certainly, considering the brain does it.
Impressive...the sparse coding of him, in this video.
yeah, found you.
2012... hmm...
I think the real question here, is HOW the FUCK is his last name pronounced???
Voxel Skull ng
*****
That does not help....
David Sanders
I just found out, it's more of an "oong"... I have a friend with that last name, and he explained it..
Voxel Skull It's a Chinese surname used in certain dialects, en.wikipedia.org/wiki/Ng_%28surname%29 similar (in the Chinese character) to those with 'Huang' as a surname. You can pronounce it like "earn" but with a 'g' at the end... so, well, "earng".
+Voxel Skull www.quora.com/How-does-Andrew-Ng-prefer-his-name-to-be-pronounced
Andrew Ng? Isn't this Roger Peng?
haha hilarious
Anyone interested in this video would enjoy this: www.huffingtonpost.com.au/2015/05/13/andrew-ng_n_7267682.html
Andrew Ng interview.
Did he seriously have to defend himself for being inspired by nature, deliberately avoiding the term "evolution"??? At around 8:45.
What do we want computers to do? Play fetch, like man's other best friend? Seems frivolous when you put it that way. Except if you're one of those who believes that accurate recollection can somehow make us whole again...
LEGO CITY
no
Humankind plays around with its brain to finally know about its God, the Creator. . . . .
Lotfi Zadeh disliked the video..
Amazing video!