Leximus Deep Learning
France
Joined Jan 11, 2018
Channel focusing on neural network technologies
Understanding Convolutional Neural Networks (CNN)
A video presentation on how to use CNNs and the concepts behind them.
Video about parallelization: ua-cam.com/video/woa34ugDSwY/v-deo.html
Errata:
- Padding is shown with the text "padding = 0" while it should equal 1 (this is why we have the larger white square around it).
- When talking about computational efficiency, I compare conv layers with dense layers. I say you get "1000 less parameters" when it should be "1000 times less parameters".
Views: 443
Videos
Lecture: Introduction to Artificial Intelligence ~ Université de Tours ~ November 2019
393 views · 4 years ago
Introductory lecture on Artificial Intelligence given at the Université de Tours on November 8, 2019.
Why do we use matrices for neural networks?
16K views · 4 years ago
This video was made as support for the video series about genetic algorithms. 3blue1brown playlist: ua-cam.com/video/kjBOesZCoqc/v-deo.html Videos for Genetic Algorithm with Neural Networks: ua-cam.com/video/XJu-ZzE3sUo/v-deo.html
2.4 A farm of snakes [Genetic Algorithm for Neural Networks - Tutorial with code]
3.5K views · 4 years ago
We now set up the pool to breed snakes and use the evolutionary algorithm to generate better networks. Principles of genetic algorithms: ua-cam.com/video/BCZt32L6Lx0/v-deo.html Github repo: github.com/alexandrelefourner/neural_networks_tutorial
2.3 The Neural Network [Genetic Algorithm for Neural Networks - Tutorial with code]
15K views · 4 years ago
We set up the neural network used for the snake game. Link to the Matrices Tutorial: ua-cam.com/video/woa34ugDSwY/v-deo.html Github repo: github.com/alexandrelefourner/neural_networks_tutorial
2.2 Principles of Evolutionary Algorithms
2K views · 5 years ago
Theoretical principles of evolutionary algorithms for neural networks. Example of the NEAT algorithm with MARIO: ua-cam.com/video/qv6UVOQ0F44/v-deo.html
2.1' Gym Exercise
1.1K views · 5 years ago
Small exercise for Gym. Link to the Github: github.com/alexandrelefourner/neural_networks_tutorial
2.1 Principles of Gym [FR] [Neural Networks via Genetic Algorithm - Tutorial with Python code]
2.5K views · 5 years ago
Introduction to the Gym library for neural networks. The Github repo is available here: github.com/alexandrelefourner/neural_networks_tutorial_fr
1. Neural Network Basics [FR]
4.2K views · 5 years ago
Translated video on the basic principles of neural networks. Link to the Github: github.com/alexandrelefourner/neural_networks_tutorial_fr Link to the English video: ua-cam.com/video/s5X64fmAqUo/v-deo.html
2.1 Gym basics [Genetic Algorithm for Neural Networks - Tutorial with code]
4.9K views · 5 years ago
Basic introduction to Gym, which will be used for the genetic algorithm. The Github repo is available here: github.com/alexandrelefourner/neural_networks_tutorial
1. Neural Network Basics [Tutorial with Python code]
13K views · 5 years ago
Welcome to this introduction to neural networks. Git repository for the exercises: github.com/alexandrelefourner/neural_networks_tutorial
❤❤
You managed to explain what I was struggling to understand for days within a minute. I cannot thank you enough 🙏 God bless you
Kindly don't stop making informative videos
amazing simple explanation. thank you!
I was wondering why not use Einstein's summation notation
Revisiting this and very impressive how simple and concise of an explanation for the matrix multiplications you make. Great starting guide for anyone confused about the exact neuron operations
I'm am hav'zing le croissant
Awesome
So basically in case of a neural network+genetic algorithm working together, the NN does not backpropagate, instead the genetic algorithm is used to improve the weights of the NN. Also, in this case the genetic algorithm doesn’t recombine. Are these choices especially suitable for this kind of problem solving? Could it be possible to only use the NN and backpropagate to discourage snakes from hitting the walls? Many thanks, very interesting
Hi! "Are these choices especially suitable for this kind of problem solving?" Not really. This was more an introduction video I made to show how an algorithm can learn from a reward. In the projects I work on and the online classes I give, this is only a first step, to understand the structure of a neural network. Backpropagating is definitely the best approach. The question is: how? This is the main topic of discussion today about how we can train a neural network. Bellman's equation is at the core of many algorithms, even if we tend more and more to use algorithms like PPO (proximal policy optimization). The core idea is to use 2 neural networks: a first one to make a prediction, and a second one that takes this prediction + the environment as input. Doing so, the model can try to predict what the reward (or the advantage) will be. Since you know the actual reward, you can compare it with the neural network's output and use the error to backpropagate through both neural networks. Genetic algorithms are interesting to know, but definitely suboptimal in many cases. I wish I had more time to make more videos about the topic!
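The critic idea described in this reply can be illustrated with a toy sketch (all names and numbers here are illustrative, not from the tutorial): a value estimator learns to predict the reward of a state, and the squared prediction error is the signal a real actor-critic method such as PPO would backpropagate. For simplicity, the estimator is a single linear layer trained with plain SGD rather than a full neural network.

```python
import numpy as np

# Toy sketch of the critic idea (illustrative, not the tutorial's code):
# a value estimator is trained to predict the reward of a state, and the
# squared prediction error is the training signal.

rng = np.random.default_rng(0)

# Pretend environment: the reward is a fixed linear function of the state.
true_w = np.array([0.5, -0.2, 0.1])

def reward(state):
    return state @ true_w

w = np.zeros(3)   # critic weights, initially all zero
lr = 0.1          # learning rate

for _ in range(500):
    s = rng.normal(size=3)   # observe a random state
    v = s @ w                # critic's value estimate
    err = v - reward(s)      # compare the estimate with the real reward
    w -= lr * err * s        # gradient step on the squared error

# After training, the critic has recovered the reward function.
print(np.round(w, 2))
```

In a real actor-critic setup, the same error would flow back through both networks instead of a hand-coded gradient step.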
Hi, I'm struggling a bit with the beginning. Do I need some apps already installed, like git, or...
Yes, git is a prerequisite :)
Nice
1:57 Amazing explanation. So the dot product is an O(log n)-depth algorithm when parallelized, because of the summation of the products.
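To make that point concrete, here is a small sketch (illustrative code, not from the video): the n products are all independent, so a parallel machine can compute them in one step, and the sum can then be done as a tree reduction in about log2(n) rounds. The loop below runs sequentially but counts the rounds a parallel reduction would need.

```python
# Dot product as a tree reduction: one step for all products, then
# roughly log2(n) rounds of pairwise additions.

def dot_tree(a, b):
    assert len(a) == len(b) and len(a) > 0
    vals = [x * y for x, y in zip(a, b)]   # one parallel step: all products
    rounds = 0
    while len(vals) > 1:                   # each iteration = one parallel round
        pairs = [vals[i] + vals[i + 1] for i in range(0, len(vals) - 1, 2)]
        if len(vals) % 2:                  # odd leftover element carried over
            pairs.append(vals[-1])
        vals = pairs
        rounds += 1
    return vals[0], rounds

result, depth = dot_tree([1, 2, 3, 4], [5, 6, 7, 8])
print(result, depth)   # 1*5 + 2*6 + 3*7 + 4*8 = 70, in log2(4) = 2 rounds
```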
😮😮
Mattresses
Mr Leximus, will this playlist continue? I've only watched up to 2.4; where are the other videos, sir? Btw, thank you so much for the vids.
So how to back-propagate this matrix and calculate the gradients?
Thank you so much! This really cleared up a lot about matrices!
I think thats the best CNN video I've seen so far.
Very good work, the best video on the basics
I have an error that says: ImportError: cannot import name 'rendering' from 'gym.envs.classic_control'. Can anyone help me?
Late answer, but for anyone facing this, it was fixed for me by downgrading the gym package to version 0.21.0. You can do that by using the command "pip install gym==0.21.0".
Please keep making videos
Perfect
Tooop
Where is the next vid hhhh
Great video keep it up!
At 23:29 the screen lists a line that says env.observation_space, both parts of which flag as unknown: the variable env and observation_space separately. Did I miss a declaration of env, or is this just a noob question?
Hi! The env variable is declared as follows: env = gym.make('babysnek-raw-16-v1') You can find it in cell 3 of the complete notebook, which I uploaded on my Github: github.com/alexandrelefourner/neural_networks_tutorial/blob/master/2.%20Your%20first%20genetic%20algorithm%20solution.ipynb It should have been defined in the previous videos.
Have you tried creating a trading AI with genetic algorithms? I'm asking because I did. So I have a question, if so :)
Trading with what?
Have you tried creating a trading AI with genetic algorithms + RL? I'm asking because I did. So I have a question, if so :)
That's what I am working on, I set up a market simulator
@@chaseratliff8505 that is what I'm doing too
Great course! Thank you very much!
Simpler : the not_gate simply returns activate(1 - activate(entry))
Absolutely. The purpose of this cell was to show an easier approach to understand... but this two-layer approach is what you would see in a conventional neural network
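The two NOT-gate constructions discussed here can be sketched as follows. The exact activate() from the tutorial is an assumption: a step function firing at 0.5 is used, and both gate names are illustrative.

```python
# Two ways to build a NOT gate, assuming activate() is a step function
# with threshold 0.5 (the tutorial's actual activation may differ).

def activate(x):
    return 1 if x >= 0.5 else 0

def not_gate_direct(entry):
    # single step: weight -1, bias 1  ->  activate(1 - entry)
    return activate(1 - entry)

def not_gate_two_layer(entry):
    # the version from the comment: activate(1 - activate(entry))
    return activate(1 - activate(entry))

# Both produce the same truth table on binary inputs.
for e in (0, 1):
    print(e, not_gate_direct(e), not_gate_two_layer(e))
```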
What Image Should I Comment On?
I'm on mac, but I cannot import sneks: No module named 'sneks'
get a windows... mac is for kardashians
With the xor_gate_pure_neuron exercise, I ended up doing: or_result = or_gate(entry_1, entry_2); and_result = and_gate(entry_1, entry_2); return activate(or_result - and_result). I didn't realise I was supposed to use weights, but I guess it's pretty much the same thing
Indeed. It is just that your weights do not appear explicitly here. The main purpose of this exercise was to understand how things work inside the black box to create an inference.
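The construction from this exchange can be written out as a small sketch (illustrative code, assuming a step activation with threshold 0.5; the tutorial's activate() and gate implementations may differ): XOR is the step of OR minus AND, i.e. weights +1 and -1 on the outputs of the two first-layer gates.

```python
# XOR built from OR and AND gates, as in the comment above, assuming a
# step activation with threshold 0.5.

def activate(x):
    return 1 if x >= 0.5 else 0

def or_gate(a, b):
    return activate(a + b)       # fires if at least one input is 1

def and_gate(a, b):
    return activate(a + b - 1)   # fires only if both inputs are 1

def xor_gate(a, b):
    # weights +1 (OR) and -1 (AND) on the first layer's outputs
    return activate(or_gate(a, b) - and_gate(a, b))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor_gate(a, b))
```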
Can we set it up for 3 inputs and 1 output?
Of course, you just have to change the input matrix to be (3, x) and the last one to (x, 1). For a single layer, you can choose to use a simple (3, 1) matrix.
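A quick shape check of this reply (illustrative code; the hidden size of 4 is an arbitrary choice, not from the tutorial): with 3 inputs and 1 output, a single layer is a (3, 1) weight matrix, and with a hidden layer the weights are (3, h) followed by (h, 1).

```python
import numpy as np

# Shape check: 3 inputs, 1 output, with and without a hidden layer.

x = np.ones((1, 3))           # one sample with 3 input features

w_single = np.zeros((3, 1))   # single layer: (3, 1)
print((x @ w_single).shape)   # (1, 1): one output

h = 4                         # hidden size, arbitrary for the example
w1 = np.zeros((3, h))         # hidden-layer weights: (3, h)
w2 = np.zeros((h, 1))         # output-layer weights: (h, 1)
print((x @ w1 @ w2).shape)    # still (1, 1): one output
```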
Nice video 👍 It looks like it should be possible to just use copy.deepcopy(parentSnake) to create a new child
Nice, short, concise overview of evolution :) If there is anything extra I would have wanted, it would be where our evolutionary algorithm differs from natural evolution: to show how evolutionary algorithms take inspiration from natural evolution but don't in any way aim to mimic it (simply to find good solutions). Thus evolutionary algorithms can differ from what we see in nature, like fixed population sizes, immortal elite individuals, niches/speciation being sometimes skipped, and many papers not even including crossover.
Link to the Matrices Tutorial seems to have been set to private
Indeed... I updated it. The link is now correct.
This video series is really great. Too bad you didn't have time to finish it... put the code on Github so we can finish understanding why it doesn't work well yet!
Pity you haven't published the end of the project, this tutorial was exactly what I'd been looking for :(
I have been quite busy lately (finishing my MSc in Big Data with a specialization in A.I.), and I wish to finish this series of videos. Since I received multiple requests about this, I will publish the final notebook on my GitHub in the next few weeks, I think. :)
@@LeximusDeepLearning Great, thank you so much! ^^
Thanks for the video :) What is the difference between an AI and a bot?
Hello, An artificial intelligence (A.I.) is a computer program that has gone through a learning process and has created instructions from data. Part of the code that composes it is adaptive and goes through a training process that makes it better at a task. A bot (robot) is a computer program that performs a task. It may have learned to do this task (in which case we speak of A.I.), or it may have all of its instructions written by a human being (in that case, it is entirely composed of human intelligence; there is no creation of intelligence in the process, and we cannot call it "artificial" intelligence).
Take the field of video games: A bot is programmed by a human. It is not able to innovate; it just repeats a series of programmed actions. You could see a bot as a series of if, then, else conditions. An AI is not programmed. It is rather a data structure (for example, a neural network). An AI is trained from game data (e.g. supervised learning) or from data of games it has played on its own (e.g. reinforcement learning). You could see an AI as a wine that matures as it is exposed to data.
Damn, crystal clear. I feel no one is appreciating this work
Thank you very much. Deeply appreciated. This video is essential for getting started with deep learning. Please keep creating more videos
I really enjoyed this series! When are the next videos coming?
Hi! Glad you liked it! Unfortunately, I've got a lot of work these days, and it has gotten even worse with the Covid epidemic. I hope to finish it in the coming months, and if I don't by August, I think I will just put the solution notebook online. You can subscribe to get a notification when a new update comes.
Hi, do you plan to continue in French?
I started these videos alongside conferences I was giving on A.I., but I haven't had many French-speaking requests in recent months. Since the audience interested so far is mainly English-speaking, I preferred to push a bit more on that side. If more French speakers become interested in this type of content, it's entirely possible :)
@@LeximusDeepLearning OK, alright. Or otherwise in writing? If you don't have the time or the desire, don't do it. ;)
@@EmpereurIV I could possibly put the complete notebook online, with all the explanations. (I have to edit it anyway, since the environment has changed and you now have to take an earlier branch of the git to get working code.)
@@LeximusDeepLearning ok, nice !
fatal: destination path 'Sneks' already exists and is not an empty directory.
DEPRECATION: Python 2.7 reached the end of its life on January 1st, 2020. Please upgrade your Python as Python 2.7 is no longer maintained. A future version of pip will drop support for Python 2.7. More details about Python 2 support in pip, can be found at pip.pypa.io/en/latest/development/release-process/#python-2-support
Defaulting to user installation because normal site-packages is not writeable
Obtaining file:///Users/andriussteponavicius/neural_networks_tutorial/Sneks
ERROR: Command errored out with exit status 1:
command: /System/Library/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/MacOS/Python -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/Users/andriussteponavicius/neural_networks_tutorial/Sneks/setup.py'"'"'; __file__='"'"'/Users/andriussteponavicius/neural_networks_tutorial/Sneks/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"' '"'"', '"'"' '"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info
cwd: /Users/andriussteponavicius/neural_networks_tutorial/Sneks/
Complete output (6 lines):
running egg_info
writing requirements to sneks.egg-info/requires.txt
writing sneks.egg-info/PKG-INFO
writing top-level names to sneks.egg-info/top_level.txt
writing dependency_links to sneks.egg-info/dependency_links.txt
error: package directory 'sneks' does not exist
ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.
As the error explains, the code is not made to work with Python 2.7. Everything was made for Python 3.6+.
I get:
ImportError Traceback (most recent call last)
<ipython-input-2-291765ad320c> in <module>()
----> 1 from lexml.lexmlexercices import * #library required to test what you have.
ImportError: No module named lexml.lexmlexercices
Please git clone the directory, as mentioned in the video :)
NameError in first cell NameError: name 'test_activation' is not defined
Hello! It's likely you only took the notebook but did not download the required library. It is available at this link: github.com/alexandrelefourner/neural_networks_tutorial/tree/master/lexml My advice would be to install git and execute the command as follows. Or go to the link "github.com/alexandrelefourner/neural_networks_tutorial", click on "Clone or Download", then "Download as ZIP". This way, you have everything you need for the tutorial :)
Awesome work! What tool do you use for animation?
Hello Sebastian! I did everything using PowerPoint :)
@@LeximusDeepLearning Wow! Most impressive.
"Hey, Where Are The French Subtitles"!"🤣
I don't use French subtitles. The video is translated here: ua-cam.com/video/s5X64fmAqUo/v-deo.html