Geoffrey Hinton: The Foundations of Deep Learning
- Published May 14, 2024
- Godfather of artificial intelligence Geoffrey Hinton gives an overview of the foundations of deep learning. In this talk, Hinton breaks down the advances of neural networks, as applied to speech and object recognition, image segmentation and reading or generating natural written language.
#ElevateTechFest
For more info visit:
Website: elevatetechfest.com
Twitter: @elevatetechfest
Facebook: @elevatetechfest
Instagram: @elevatetechfest - Science & Technology
What a luxury to be able to watch this on-demand, from anywhere, and for free!
Always been fascinated with computer neural networks. Exciting times!
Man, he has a great sense of humour and clarity at the same time.
This is extraordinary!
loved this, thank you!
No, thank YOU for checking us out!
deeply insightful
watching such an amazing video at 5 am
amazing ..thank you
This was a great presentation
Glad you enjoyed it!
"so if you're obsessed with there being only one correct answer and being able to prove you got it, backpropagation is not for you, nor is LIFE"
classic Geoffrey
He hasn't lost the British sarcasm
Actually, by that line Prof. Hinton refers to the time when no one believed in his line of research, and that decade taught him a great deal of patience until the technology caught up with his ideas.
@Ernest Alonzo WOW... do you think people who watch these videos fall for that?
That was the best of his one-liners and zingers.
A tensor for paradigm shifts.
13:13 this picture of an RNN is the best illustration of RNNs I've ever seen
Geoffrey Hinton sir has super clarity in his thoughts. I love it. He does not beat around the bush.
He should be put on trial for playing with the future of all humanity so he can make a big stash of money while our children are left with no future. That is what happens with mad scientists who let their vanity take hold of them.
You can make inside-out neural networks with fixed weighted sums (dot products) and adjustable (parametric) activation functions. Rather than the other way around.
Then you are free to use high speed fast transforms for the fixed dot products.
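The commenter's "inside-out" idea can be sketched roughly as follows. This is a hypothetical illustration, not anything from the talk: the fixed dot products are done with a fast Walsh-Hadamard transform (O(n log n) instead of a learned O(n^2) dense layer), and the trainable parameters are per-unit two-slope activation functions; the function names `fwht` and `layer` are made up for this sketch.

```python
# Sketch: FIXED weighted sums (a fast Walsh-Hadamard transform) followed by
# ADJUSTABLE per-unit activation functions (a two-slope, parametric-ReLU form).

def fwht(x):
    """Fast Walsh-Hadamard transform; len(x) must be a power of 2."""
    x = list(x)
    h = 1
    while h < len(x):
        for i in range(0, len(x), h * 2):
            for j in range(i, i + h):
                a, b = x[j], x[j + h]
                x[j], x[j + h] = a + b, a - b
        h *= 2
    return x

def layer(x, pos_slopes, neg_slopes):
    """Fixed transform, then trainable two-slope activations per unit."""
    z = fwht(x)
    return [p * v if v >= 0 else n * v
            for v, p, n in zip(z, pos_slopes, neg_slopes)]

x = [1.0, 2.0, 3.0, 4.0]
pos = [1.0, 0.5, 1.0, 0.5]   # trainable slopes for positive pre-activations
neg = [0.1, 0.1, 0.1, 0.1]   # trainable slopes for negative pre-activations
print(layer(x, pos, neg))
```

Only the slope lists would be learned; the transform itself never changes, which is what frees you to use the fast algorithm.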
Cool it with the 1960s slide presentation... definitely needs a producer.
In a nutshell, an artificial neural network is just a parametric composite function trained using the chain rule of differentiation.
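A minimal worked example of that view, assuming a toy one-input "network" f(x) = sigmoid(w2 * tanh(w1 * x)); the function names here are invented for illustration.

```python
import math

# A parametric composite function: f(x) = sigmoid(w2 * tanh(w1 * x)).
# The gradient with respect to w1 comes from multiplying local derivatives
# layer by layer, i.e. the chain rule -- which is all backpropagation does.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def grad_w1(x, w1, w2):
    a = math.tanh(w1 * x)          # hidden activation
    y = sigmoid(w2 * a)            # output
    dy_da = y * (1.0 - y) * w2     # sigmoid' times w2
    da_dw1 = (1.0 - a * a) * x     # tanh' times x
    return dy_da * da_dw1          # chain rule: dL/dw1 = dy/da * da/dw1

# Sanity check against a central finite difference
x, w1, w2 = 0.5, 0.3, -1.2
eps = 1e-6
num = (sigmoid(w2 * math.tanh((w1 + eps) * x)) -
       sigmoid(w2 * math.tanh((w1 - eps) * x))) / (2 * eps)
print(abs(grad_w1(x, w1, w2) - num))
```

The analytic chain-rule gradient and the numerical estimate agree to many decimal places, which is the whole point of the "just calculus" observation.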
Thanks doc, what a BEAUTIFUL world you move in!
He is a genius. Period.
AND you must be a complete moron for believing such!
Or perhaps he's simply riding the gravy train... keep it goin', buddy!
About the singularity: as Ray Kurzweil says, when the whole universe becomes a computer, what does it calculate, even though the purpose for calculating has already disappeared?
I like this guy.
Recommendable
Smart! Long life!
17:16 Hinton flips off the audience
Are hidden Markov models as relevant as Ray Kurzweil suggested?
11:27 Tomáš Mikolov just smiled :))
17:16 :D
17:18 🤣🤣🤣
🤖🥇🤣🐬
19:23 word2vec
20:51
Hinton is giving us the finger at 17:15 xD
I really love you!!!❤
this guy was waaaay ahead of his time
he casually defined what a thought is 5 years ago, and that thought never left my head
Knowledge is accessible data. Intelligence is the ability to infer.
paul mitchell likes this quote
Absolutely fascinating for a dummy like me
Process, or singularity, is the heart of AI.
Alan Turing clearly defined this in his 1936 paper on computable numbers.
Input > Process > Output (Turing-complete expression, or circular)
Process (Turing-incomplete expression, or non-circular)
RNNs are fun. I made an RNN for translation (word in, "thoughts" (RNN) work it out) over the breaks in one week in high school in the early '80s. It could learn new words and structures and did an OK translation. Just made it for fun, like all the NN and AI stuff at that time.
Could someone clarify? He explained that backprop is better than "mutation" because backprop is parallelized vs. serialized, but his explanation doesn't convey why backprop is achievable in parallel.
Maybe if you have a negative and a positive direction then you'd be best covering the known ground. I don't know either.
Because of the way the vector math works, you can calculate the effect that a change to any one weight or bias would have on the result. Therefore you can adjust the result by manipulating multiple weights or biases at the same time. Here's a pretty good video explaining backprop more in depth: ua-cam.com/video/Ilg3gGewQ5U/v-deo.html
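To make that parallelism concrete, here is a minimal sketch under invented assumptions (a linear model with squared-error loss; the function names are made up): one shared upstream error term yields the partial derivative for every weight at once, while the mutation-style finite-difference approach needs a separate loss evaluation per weight.

```python
# Backprop vs. per-weight "mutation" on a linear model y = sum(w_i * x_i)
# with squared-error loss L = (y - target)^2.

def loss(w, x, target):
    y = sum(wi * xi for wi, xi in zip(w, x))
    return (y - target) ** 2

def grads_backprop(w, x, target):
    """All partials from ONE pass: dL/dw_i = 2*(y - target)*x_i."""
    y = sum(wi * xi for wi, xi in zip(w, x))
    err = 2.0 * (y - target)       # shared upstream gradient, computed once
    return [err * xi for xi in x]  # reused for every weight simultaneously

def grads_mutation(w, x, target, eps=1e-6):
    """Finite differences: one extra loss evaluation PER weight."""
    base = loss(w, x, target)
    out = []
    for i in range(len(w)):
        w2 = list(w)
        w2[i] += eps               # mutate one weight at a time
        out.append((loss(w2, x, target) - base) / eps)
    return out

w, x, t = [0.5, -1.0, 2.0], [1.0, 2.0, 3.0], 1.0
print(grads_backprop(w, x, t))   # whole gradient from a single pass
print(grads_mutation(w, x, t))   # same numbers, but len(w)+1 evaluations
```

The two methods agree numerically; the difference is cost. Backprop's backward pass reuses shared intermediate quantities, so the per-weight work is independent and can be done in parallel, whereas mutation must evaluate the whole network once per weight.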
9:37 💜
9:47 :D
I used an evolutionary algorithm for training NNs in the '80s. It worked well even then. Limited hardware, and I had to program everything myself, but I got a general AI that could be used for almost anything.
At 8:59 some comments about RIM. What is RIM?
Research In Motion, the original name of BlackBerry, who used to make smartphones.
How can someone be intelligent enough to invent neural networks and entertaining enough to give a presentation like that? It's almost unfair lol.
And you forgot how clearly he explained the key concepts: maybe 300 pages of a typical book on the subject compressed into about 20 phrases in this talk.
He did not invent neural networks
@@shubhamp.4155 but he brought many of them to life and redesigned them completely
It is just math, and nothing else. So anyone who understands math at an academic level can easily learn how neural networks work and how to manipulate the notation to get different types. And I am not talking about "learn neural networks in 5 minutes" videos where people show you how to program a model in TensorFlow.
What is the date of this speech?
This speech took place on September 13, 2017 at Elevate TechFest.
1998
Great job Geoff. Pray that God would bless your life!
Wow :)
Learning is not necessarily intelligence. More rapid learning means twigging the concept and applying it to new problems.
I'm proud to be his student's student 😆 I graduated with a thesis related to AI, though I have no fundamental knowledge about it 🤣
Looking back, it was truly a miracle that I completed the whole thesis ^^ And thanks for the accident too. Maybe, confronted with the bandages, the council was more lenient 😁
25:00
Geoff himself doesn't even know how it works! It just works!!! What kind of scientist could be satisfied with this type of "reasoning"? HYPE
He failed to give any sensible reasons for his recent assertion about the dangers of AI. He does, however, seem to have issues with Google; perhaps he's angry about something Google didn't do for him?
If you look at ChatGPT, the leader of a country who is failing politically could ask ChatGPT to suggest ways to cling to power. The answer could result in bloodshed. Another country could use a ChatGPT-like system to control its entire defence establishment. That is the danger of AI, according to Junaid Mubeen, author of Mathematical Intelligence.
@@prashanpremaratneAU He seems to be stupid, the author.
14:47 symbols go in, symbols go out but in the middle it can't be "symbols"!
Damn, he looks like Palpatine here
lol yeah
❤️🧡💛💚💙💜💙💚💛🧡❤️🧡💛
We are ALL the SAME PERSON experiencing life in a BUNCH OF DIFFERENT BODIES!!! Which means that EVERY PERSON you meet is really just YOU... LIVING IN ANOTHER BODY!! You see, you are INTERACTING with YOURSELF at ALL TIMES!!! And once you understand this, you can achieve unity!
I and my father are one
Love Thy Neighbor as Thyself
💜💙💚💛🧡❤️🧡💛💚💙💜💙💚
Interesting. So how does the death of one body affect the living body?
If you are going to call people stupid, you shouldn't contradict yourself and agree with the people you insult.
omg so many ads
This guy now says that AI is dangerous, after decades of leading AI projects. Why now, and not at the time this presentation was given?
He says himself that he was surprised by the pace of AI development. He thought these dangers would come much later and more slowly.
LM21 what's up
Professor Hinton is amazing, but there are too many of these NN for dummies lectures. They're starting to clog the space. It's a shame he didn't say anything really interesting here.
True, finding advanced talks is getting pretty hard.
For advanced knowledge you don't come to YouTube... you fucking read research papers
Sivaram Karanam, even that's not true; you can search any ML topic on YouTube and get a lecture as advanced as you like, a whole series of them.
Did you not read the title of this video? It clearly says foundations.
@@pd.dataframe2833, not entirely true. Some of the best research papers are published and presented at symposiums such as this one. We are all very lucky that there are videos of these podium presentations so that we may also be inspired to learn.
Hello🦩🦩🦩,
God the Father loves you so much that He sent Holy, Sinless Jesus (His Holy Son) to earth to be born of a virgin. Then He grew up and died on a cross for our sins. He was in the tomb for 3 days, then Father God raised Holy and Sinless Jesus Christ (Y'shua) to life! He appeared to people and went back to Heaven. We must receive Sinless Jesus sincerely to be God's child.
John 1:12 says, "But as many as received him, to them gave he power to become the sons of God, even to them that believe on his name."
Will you receive Christ sincerely?
Some basic shit? Did I hear correctly?
Ok hinton them ok😂
Is a thought more than an image? Think about it.
Today, losing your job to AI agents is unacceptable. AI job loss is here. So is AI as a weapon. Can we please find a way to cease AI/GPT, or begin pausing AI before it's too late?
look at them, fatal wound/mark on their forehead
Who dislikes Hinton?
Well, he shit on symbolists during the talk.
thank God I decided not to be a PhD student...
A spiteful professor who thinks his tinkering with computers does good for humanity! Talk about speech recognition: the machine voice announcing a caller on our phone system is always wrong.