Do yourself a favor and skip forward to 10:55
Good talk. I like that he took the time to adapt his presentation for Austria. What's interesting to me is that many of the concepts he presented are based on very simple mechanisms. I have no doubt that it gets more complicated when you delve a bit deeper into the subject, but the core functions are all very short, yet give accurate results when combined with large amounts of data.
Note that DeepMind's Deep Q-Learning code that plays Atari Breakout (and many more) runs quite well on a consumer-level GPU at home. :) I've added a configuration file that enables you to run it on a 1.5GB card. Details and code: ua-cam.com/video/V1eYniJ0Rnk/v-deo.html
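If you're wondering why memory is the bottleneck: most of DQN's footprint comes from the replay memory of stored frames, so a configuration for a small card presumably mostly shrinks that buffer. Here's a rough Python sketch of the arithmetic; the names and numbers below are my own illustrative assumptions, not values from the actual DeepMind code or the config file:

```python
# Back-of-the-envelope sketch of why a smaller replay memory fits a 1.5 GB
# card. All names and numbers are illustrative assumptions, not taken from
# the actual DeepMind code or the linked configuration file.
FRAME_H, FRAME_W = 84, 84      # Atari frames after grayscaling and downsampling

def replay_memory_bytes(num_frames, bytes_per_pixel=1):
    """Approximate storage for num_frames uint8 frames in the replay memory."""
    return num_frames * FRAME_H * FRAME_W * bytes_per_pixel

# DQN's commonly cited 1M-frame replay memory vs. a reduced one for a small card.
for num_frames in (1_000_000, 200_000):
    gib = replay_memory_bytes(num_frames) / 1024**3
    print(f"{num_frames:>9,} frames -> {gib:.2f} GiB")
```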
At 39:00 Norvig may have mistakenly claimed that GoogLeNet trained on unlabeled data for ILSVRC 2014. The ILSVRC 2014 dataset is the same as the ILSVRC 2012 dataset, and the training corpus for that dataset is definitely labeled.
image-net.org/challenges/LSVRC/2012/
It would be pretty groundbreaking if Google had designed a model that could take images as input and give textual labels as output without ever being exposed to text. Still remarkable that GoogLeNet was so effective.
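If anyone wants to verify that the training corpus is labeled: with torchvision, every ILSVRC-2012 training sample comes as an (image, class_index) pair. A minimal sketch, assuming you've already downloaded the ImageNet archives to ./imagenet (the path is my placeholder):

```python
# Minimal check that the ILSVRC-2012 training set is labeled: torchvision
# yields (image, class_index) pairs. Assumes the ImageNet archives were
# already downloaded to ./imagenet (torchvision cannot fetch them for you).
from torchvision.datasets import ImageNet

train = ImageNet(root="./imagenet", split="train")
image, class_idx = train[0]          # every sample carries a class label
print(train.classes[class_idx])      # human-readable synset names for that label
```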
Machine learning: "the computer learns to write the program that we, as programmers, aren't smart enough to write" - Peter Norvig
@Károly Zsolnai - now the link is gone, and so is your original comment! You probably edited your original comment and overwrote it.
Károly Zsolnai - where is the code? Your link just leads back here. Is that a recursion joke??
You are indeed right, it was the wrong link. Now it's fixed, thanks! :)
I always love AI mistakes, and maybe you do too, so here's a timestamp for the funny classification mistakes: 41:10
What happens to computer scientists who suddenly become rich and famous? They become complacent and retire.
The game-playing stuff is very cool. If you'd like to see another example of a reinforcement-trained AI playing a game, trained from just the score and its view, check out aiseedo.com/demos/cookiemonster/ - every time you open the webpage, you get your own cloud-based AI playing the game in your browser!
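For anyone curious what "trained from just the score and its view" looks like in code, here's a minimal tabular Q-learning sketch. It's purely illustrative; the states, actions, and constants are my placeholders, not the demo's actual implementation:

```python
# Minimal tabular Q-learning: the agent only ever sees an observation
# ("its view") and a reward derived from the score. Purely illustrative.
import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.99, 0.1   # learning rate, discount, exploration
ACTIONS = ["left", "right", "stay"]       # placeholder action set

Q = defaultdict(float)                    # Q[(state, action)] -> expected return

def choose_action(state):
    """Epsilon-greedy: usually exploit the best known action, sometimes explore."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    """One Q-learning step: move Q toward reward + discounted best next value."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
```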