Great results from these simple heuristics. I think the biggest point you overlooked for the inputs was to include the distance to the food in each of the four directions. Instead of a simple 0 or 1 value for whether the food exists in that direction, replace that value with the normalized distance to the food in each direction. The closer the food column is to the snake's head column, the closer to 1 that value is for the x direction facing the food. And likewise, the closer the food row is to the snake's head row, the closer to 1 that value is for the y direction facing the food. The 0 value still means the same thing, that there's no food in that direction, basically infinite distance from snake head to food position, but now that the value isn't just 1, but a normalized value between 0 and 1, you won't get such spiky behavior when the snake's head crosses the x or y position of the food, and the snake will always have knowledge of how far away it is from the food, just like the player has knowledge of the distance the snake is from the food. My intuition is that this simple change could produce some pretty "NEAT" strategies :) Let me know if you end up trying this solution and whether it produces better results. I'm not smart enough to code it up on my own.
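A minimal sketch of this encoding, assuming a square grid and a `1 - distance / grid_size` normalization (the exact formula is an assumption, not something from the video):

```python
def food_distance_inputs(head_x, head_y, food_x, food_y, grid_size):
    """Four inputs (up, down, left, right) in [0, 1]. 0 still means
    'no food that way'; otherwise the value approaches 1 as the food's
    row/column nears the head's row/column."""
    up = down = left = right = 0.0
    x_closeness = 1.0 - abs(food_x - head_x) / grid_size
    y_closeness = 1.0 - abs(food_y - head_y) / grid_size
    if food_y < head_y:          # food is above the head
        up = y_closeness
    elif food_y > head_y:        # food is below
        down = y_closeness
    if food_x < head_x:          # food is to the left
        left = x_closeness
    elif food_x > head_x:        # food is to the right
        right = x_closeness
    return up, down, left, right
```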
Thanks Peter. I have actually tried many variations with different kinds of inputs before publishing the video, one of them being the distance to the food (encoded as you suggested). Unfortunately, I didn't get much better results when training the network - some comments suggest that this is due to overfitting, and I should be training the network by generating random states that include long snakes, and reward it for each eaten food. I'm keen to try this at some point.
@@TechWithNikola I took it upon myself to try this using my own crude Python implementation and ended up finding some genomes that were able to seek out food very quickly by adding the head's proximity to the food into the fitness function, so that the snake not only fits for a high score, but also adds the average head-to-food proximity to the score. This gave me much more aggressive food-seeking behavior, but I weighted it too heavily relative to the score, because the snake will move towards food regardless of whether it needs to move through its own body to get there. So I was able to find some genomes that achieved scores of 65 on a 50x50 grid, but only because the food didn't have the snake's body in the way, and 37 on a 10x10 grid was my highest score with a max of 200 steps per game. I noticed that if I gave the snake more steps to train before getting cut off, even at 1000 steps, it would run into itself before the step limit and end the game. I think moving out of its own way to get to food will require an even more complex input embedding, complete with free spaces to move towards, so that it knows to leave room for itself to escape from a dead end. I'm not sure how to get much better results than that without increasing the input space by a lot, and thereby increasing the size of the neural net, requiring much longer training and much tighter mutation rates. I don't have the compute power or time to give that a try, but it's a fun experiment to run. Thanks for introducing this idea to me, it was a nice recommendation from the YouTube algorithm after diving into a few machine learning videos recently.
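For reference, a hedged sketch of this kind of proximity-augmented fitness; the Manhattan normalization and the 0.5 default weight are invented, and as noted above, weighting proximity too heavily makes the snake chase food into its own body:

```python
def step_proximity(head, food, grid_size):
    """Per-step head-to-food proximity in (0, 1]; 1 = head on the food.
    `head` and `food` are (x, y) tuples; Manhattan distance, normalized
    by the largest possible distance on the grid."""
    dist = abs(head[0] - food[0]) + abs(head[1] - food[1])
    return 1.0 - dist / (2.0 * grid_size)

def fitness(score, proximities, proximity_weight=0.5):
    """Game score plus a weighted average of the per-step proximities.
    The weight needs tuning: too high and the snake chases food
    straight through its own body."""
    avg = sum(proximities) / len(proximities) if proximities else 0.0
    return score + proximity_weight * avg
```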
I've been trying to understand NEAT for soooo many years now... I understood the concept but could not understand how to encode/evolve the networks. Thanks for the video, great explanation of the core concepts of NEAT. Also, the way you present code is easy on the eyes: good font size and tweening, nice editing.
Thanks for the comment. I’m glad you’ve liked it. I ran into the same problem while I was trying to understand NEAT. It took me a long time, so I decided to share what I’ve learned.
I love your content :) And a suggestion for how to make the algorithm work: don't start each training run with a short snake; randomize the starting snake length and position (or save snapshots of a snake every now and then and use them as new starting positions in the next generations). That way you can still train without running each generation until death, but you get the benefit of training with longer and longer snakes. Another benefit is that it won't overfit the algorithm for short snakes :)
I’m glad :-)
That makes sense. Thanks for the suggestion. I’ll give it a go.
I just didn’t understand one part: “without running each snake until death”. When would you stop the training? What would the fitness function look like?
@@TechWithNikola For the first question, when to stop training: pick a target score, say 130; when the snake hits it, stop, then raise the bar to 150 and train to that, like clearing level 1, then level 2.
For the fitness function: start the snake at size 3 with score 0, then restart at size 10 or 20 with score 0, adjusting the stopping score down as the starting size grows.
Your current approach trains short snakes again and again (overfitting on short snakes), so the AI learns how to play with a short snake but knows far less about playing once it gets long. Train with short snakes first, then with long ones: clear level 1 when the snake reaches size 20, then increase the starting size for level 2, hit that mark, and so on, as described above.
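A sketch of the level-based curriculum described above; the thresholds and the `train_generation` hook are hypothetical stand-ins for a real NEAT training loop:

```python
def run_curriculum(train_generation):
    """Level-based training. `train_generation(start_length)` is a
    hypothetical hook that runs one NEAT generation with snakes
    spawned at `start_length` and returns the best score seen.
    Thresholds are illustrative: clear one score bar, then restart
    with a longer snake and a new bar."""
    levels = [
        (3, 130),    # level 1: the usual short snake
        (10, 150),   # level 2: longer starting snake, higher bar
        (20, 170),   # level 3
    ]
    for start_length, target in levels:
        best = 0
        while best < target:
            best = max(best, train_generation(start_length))
```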
Suggestion: Chrispresso and Ezra Anderson both managed to code snakes that solve the game by using inputs in all 8 directions including diagonals, instead of just 4, plus the tail's current direction.
Since NEAT seems to struggle with lots of neurons, I'd add hacks like unifying the wall, food, and body inputs into +distance values for food and -distance values for body and wall. I'd also map the inputs relative to the head direction instead of using absolute map directions, so the network doesn't have to mutate the same function 4 or 8 times. Plus I could add the ratio of current length to maximum size, since strategies change as the snake gets longer.
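A sketch of these signed, head-relative ray inputs; the `1/steps` falloff and the cell labels are assumptions:

```python
# Eight ray directions in absolute compass order, clockwise from north.
RAYS = [(0, -1), (1, -1), (1, 0), (1, 1), (0, 1), (-1, 1), (-1, 0), (-1, -1)]

def rotate_rays(rays, heading):
    """Re-order rays so index 0 points 'straight ahead' for the current
    heading (0 = north, 1 = east, 2 = south, 3 = west); one quarter
    turn is two 45-degree rays."""
    shift = 2 * heading
    return rays[shift:] + rays[:shift]

def ray_value(cells_along_ray):
    """Single signed input per ray: walk outward from the head; food
    gives +1/steps, body or wall gives -1/steps, an empty ray gives 0,
    so nearer objects produce stronger signals."""
    for steps, cell in enumerate(cells_along_ray, start=1):
        if cell == "food":
            return 1.0 / steps
        if cell in ("body", "wall"):
            return -1.0 / steps
    return 0.0
```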
What a video! Well presented and beautifully illustrated to the point even I could follow along. This channel will blow up, I predict -- I'll be sure to recommend your videos!
I’m so glad you’ve liked it! Thanks a lot for recommending my channel, it will help a lot with the growth.
Love how the genetics of living organisms influences and inspires technological solutions, programming included! It was insightful to see the reasoning behind the analysis of the trial-and-error approach to training the snake. Well done 😄
I'm so glad that you've enjoyed it, Sara, and thank you for the kind words :)
Indeed, it's amazing how many real-life ideas are applied in computer science.
Extremely detailed and high quality NEAT video! Great job!!
Thanks a lot! I'm so happy to hear you've enjoyed it.
2:23 My brain itches thinking about how you implemented the dot following the current point on the curve in the visualization. That was probably a challenge.
I used an animation library called Manim.
@@TechWithNikola oh right, libraries exist. I forgot
Please continue making more videos on NEAT, such a fascinating topic!
Thank you! I will consider more neat videos in the future if there’s enough interest.
Great video, thx. I second the suggestion about starting with a random-length snake. My first approach would also be to drastically increase the population size and mutation rate, plus dynamic management of hyperparameters so that you always have a fixed number of species regardless of fitness and stagnation. I'm obsessed with NEAT and I'm learning Python just for this reason now 😅
Thanks, I'm glad you've liked the video!
Thanks for the suggestions too. That makes sense. I will try the proposed updates at some point. I'm curious to see how that performs :)
Some ideas:
1. Double the input neuron count, plus one extra. Feed in the previous "frame" of data as a kind of memory, and use the extra neuron to input one value (let's say 1) for the first frame and another value (let's say 0) for all later frames; this way the snake may learn to ignore the memory values on the first frame (sketched below).
2. Feed it the entire board; that's how we play snake, we know everything that is on the board at all times.
3. Feed it the entire board combined with the memory idea from point 1.
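A sketch of idea 1 (the frame memory), under an assumed data layout:

```python
class FrameMemory:
    """Doubles the input vector per idea 1: current inputs, the
    previous frame's inputs, and one extra flag that is 1 only on the
    first frame (so the network can learn to ignore the stale memory
    values there)."""

    def __init__(self, n_inputs):
        self.prev = [0.0] * n_inputs
        self.first_frame = True

    def build(self, inputs):
        flag = 1.0 if self.first_frame else 0.0
        vector = list(inputs) + self.prev + [flag]  # 2n + 1 inputs
        self.prev = list(inputs)
        self.first_frame = False
        return vector

# Usage: mem = FrameMemory(8); net_inputs = mem.build(current_inputs)
```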
Thank you! I was searching exactly for this kind of knowledge!❤
You're welcome! I'm so glad. :)
It can be highly improved. I had the same issue as you, and I struggled for months (on and off, of course) to make it actually learn. What actually worked was another, meta genetic algorithm that optimized the mutation hyperparameters (rate and power) for the NEAT algorithm, by running 200 trainings in parallel up to a certain generation number.
Random search or grid search were also pretty good. You just have to try random mutation rates and powers and restart the training many times to get onto the right learning path.
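A sketch of the random-search variant; `run_training` is a hypothetical stand-in for a full NEAT training run, and the sampling ranges are invented. In practice the trials would run in parallel, as the commenter did; a serial loop keeps the sketch simple:

```python
import random

def random_search(run_training, trials=200, generations=50):
    """`run_training(rate, power, generations)` is assumed to run a
    fresh NEAT training with those mutation settings and return the
    best fitness reached. Sample settings at random and keep the
    winner."""
    best = (float("-inf"), None)
    for _ in range(trials):
        rate = random.uniform(0.01, 0.9)   # mutation rate
        power = random.uniform(0.1, 2.0)   # mutation power (step size)
        score = run_training(rate, power, generations)
        best = max(best, (score, (rate, power)))
    return best
```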
That’s very interesting. Thanks a lot for the suggestion. I will give this a go at some point.
Love your video, I would even try following your example as practice with NEAT.
Thank you Danya. I’m very happy to hear that you’ve loved it.
Very interesting, we need more videos like these.
Thank you! I’ll try to make more videos like this going forward.
Excellent, I really liked the approach. One of the common pitfalls is premature convergence of the population. There are a few tricks, like carefully managing the selection pressure and keeping it as constant as possible. The choice of operators is also crucial: mutation/crossover operators with finite states limit your capacity for a diverse genetic pool, so if you suffer from premature convergence, operators with infinite states should be preferred.
Thank you, I'm glad you've enjoyed it :)
Also, thanks for taking the time to suggest improvements. I'll probably play a bit more with the configs at some point. What are the operators with infinite states?
@@TechWithNikola You can look at page 154 of this document. It is a bit old but should do the job; there might be more modern implementations now: dspace.lib.cranfield.ac.uk/bitstream/handle/1826/93/J-M.Roger2003.pdf;sequence=2
Quick question: why do you implement it with the restriction that it can't have cycles? The original NEAT paper never mentions that it has to be a feed-forward network.
This is a bit random, but about a year ago I compared how different NEAT implementations handle this. Some enforce an acyclic graph and move through the network one layer at a time. Some implementations have no concept of layers and simply update the network N times, where N is the minimum/maximum number of neurons that has to be visited before reaching the output.
The original paper is not clear on how to deal with this, so many people have made different solutions. I kind of have to ask: how/why did you decide not to let mutations add cycles?
That's a great question. I spent a lot of time trying to understand how the original NEAT paper handles cycles, and as you said, it simply updates the network N times.
I think the choice between recurrent neural networks (RNNs) and multilayer perceptron networks shouldn't be coupled with how the network is trained. I see NEAT as a way to train the network, similar to backpropagation. My current design could support RNNs as well, but I haven't implemented that functionality - it wouldn't be too hard to add.
As for why I decided to go with a layered network for this project, I just thought it was easier to implement. I also have a better understanding of such networks (I know very little about RNNs), so I didn't want to diverge from that. Given that this is the first time I'm implementing NEAT, layered NNs made debugging easier.
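For readers wondering what "updates the network N times" looks like next to the layered approach, a minimal sketch of evaluating a possibly-cyclic genome by synchronous relaxation; the data layout is invented:

```python
import math

def evaluate(neurons, connections, inputs, steps):
    """Evaluate a possibly-cyclic NEAT phenotype: recompute every
    neuron from the previous step's activations, `steps` times
    (roughly the longest path length), which tolerates cycles.

    `neurons`: dict node id -> bias (input nodes included but skipped);
    `connections`: list of (src, dst, weight) for enabled genes only;
    `inputs`: dict node id -> clamped value for input neurons."""
    act = {nid: 0.0 for nid in neurons}
    act.update(inputs)
    for _ in range(steps):
        nxt = dict(act)
        for nid, bias in neurons.items():
            if nid in inputs:
                continue  # input neurons keep their clamped values
            total = bias + sum(w * act[src]
                               for src, dst, w in connections
                               if dst == nid)
            nxt[nid] = math.tanh(total)
        act = nxt  # synchronous update: all neurons advance together
    return act
```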
@@TechWithNikola thanks for answer 😃 Also congrats on a great video 👍
@@TechWithNikola It would be interesting to see what happens if we do add cycles. A real brain network has cyclic motifs, and they may also play an important role here. And maybe we do need to separate the evolution of network structures from the training of each network. Another thought is that maybe the mutation is not drastic enough; a more drastic mutation (say, a huge change of network structure every few "decades") may help to get out of local minima.
Nitpick ahead: while resolving the network (feeding inputs and getting outputs) you remove the unconnected neurons. I think that might be wrong, as an unconnected neuron still has an output (because of its bias) and will affect the output.
Yep, agreed. However, I don’t know if that matters much in practice?
My feeling is that it doesn’t because that’s just a way to interpret the genotype, and if it was relevant I’d hope that other nodes would appear over time. I may be wrong though.
Love the video! Definitely agree with others that it would be very interesting to see more from you regarding NEAT.
I'm sure you might have thought about it already or tried implementing it, but different kinds of inputs, as well as different kinds of fitness functions, can be ways to "push" your model into different behaviors. I'm still learning as well while trying to train my own model, but being careful not to overfit for behaviors that may be a result of survivorship bias, giving rewards or penalties for desired/undesired outcomes can be a way to get past, or flat-out avoid, unintended behavior loops. For instance, record how long the snake has been X distance from any wall and how many moves it's been since its last score, and if it reaches a certain threshold, penalize it. There are already some obvious problems with that solution, but it's just an idea off the top of my head.
Thank you :-) I’d like to get back to this project at some point. There are a few ideas that I’d like to try. Maybe even experiment with other ML techniques.
I think that one of the downsides of my current model is that it overfits for short snakes, so I’d like to try a different fitness function. Maybe generate longer snakes and, instead of scoring snake length, score how many times the snake ate food.
Now I appreciate Machine Learning more than ever before because of this
This was extremely helpful! My only thought on the snake is that maybe you can scale the brain's "vision" under different criteria. Early game should be seek food and avoid walls; later game should be avoid yourself first and food second. Maybe you can weight the inputs based on the length of the snake?
Glad you've liked it. Yeah, a few people have suggested ways to train the snake: starting with levels or, more generally, generating random-length snakes and rewarding them based on how many times they ate food rather than on length. I might try this at some point.
Ideas to improve: add more "rays" for detecting food, walls, and body instead of the 3 it has now; merge the wall and body rays into one (effectively making it detect the body as a wall); make the snake aware of the apple's position relative to its head and direction; and tell it how close it is to the wall in the direction where the wall is closest.
Another Snake Game AI video series (in Portuguese)
Part 1 (AI): ua-cam.com/video/awz1ghokP3k/v-deo.html
Part 2 (Monte Carlo): ua-cam.com/video/S6p7NJUxnOo/v-deo.html
Part 3 (dijkstra pathfinder): ua-cam.com/video/Vii9XiQ8bec/v-deo.html
Thanks. I will watch these at some point. I do want to get back to this project again, and give it another go.
FWIW, I have tried all of those suggestions without much progress. I think the problem is in overfitting the solution for short snakes.
putting src behind patreon is laaaaame
quality explanations, and editing is on POINT
Thank you!
I'm thinking of making the network take the entire screen and express everything as a strength from 0 to 1.
The video and explanation are fascinating. I have one question: do you code these illustrations (graphs, curves) or are they made in video editing software?
Thank you! I code most of them, yeah, then edit later. I use Manim, PowerPoint and Adobe Premiere to do that.
lots of work in this video, thank you!
Indeed. It was a lot of work to research the topic, a lot to implement it, and also a lot to make the video :-) I'm glad when people appreciate it, so thank you for taking the time to comment!
Great video. One question: what is the song that starts at 26:26? Thank you so much.
Thank you. It's called "Matrika - Lazer Beam"
That's pretty NEAT
Where is the rest?
Cool video. One question, rather off-topic: how do you animate code changes?
Thanks. I'm using Keynote on MacOS for code animation.
Hi! Nice video. I'm really hoping to learn much more about AI. I've coded in Python (for almost a year, with lots of projects done), and I'm actually learning C# in the hope of working with Unity, and I'm really interested in the AI world. Where can I find the official NEAT documentation shown in the video? Thanks in advance!!
Hi, thank you! AI is fun.
The official NEAT whitepaper is here: nn.cs.utexas.edu/downloads/papers/stanley.cec02.pdf
Just a heads up, it's not a very detailed paper and I've had to figure out a lot of details on my own and by reading lots of source code.
nice content man good job!
Thanks man!
9:16 how do you detect a cycle?
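The question goes unanswered in the thread, but the usual trick in feed-forward NEAT implementations is to reject a mutation that adds a connection from u to v whenever v already reaches u. A sketch, not necessarily the video's method:

```python
def creates_cycle(edges, src, dst):
    """Would adding src -> dst create a cycle? True iff dst already
    reaches src through the existing directed edges (iterative DFS).
    `edges` is a list of (from_node, to_node) pairs."""
    stack, seen = [dst], set()
    while stack:
        node = stack.pop()
        if node == src:
            return True
        if node in seen:
            continue
        seen.add(node)
        stack.extend(b for a, b in edges if a == node)
    return False
```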
You did it again, too good to be true. Are you an AI? :D
Haha thanks. Not an AI (but, would an AI say that?) 😀
3:12, looking at NEAT as simply a network training algorithm can miss a bit of nuance. NEAT optimizes a *topology* and *parameters* (weights), while Backpropagation and many other optimization routines only optimize *parameters.* So you could use NEAT as a topology generator, then fine-tune the network using Backpropagation or an STDP method if you're into SNNs.
With neural nets, I struggle with backpropagation. I can do it on paper and all, but implementing it programmatically... Have you got tips on how I could approach this?
Hi, apologies for the late response. It's been a very long time since I've looked at or implemented backpropagation. If you can do it by hand but you struggle to convert this to code, maybe I can try and help. Can you provide more information on what is troubling you?
Have you tried representing the network as a matrix or vectors? If so, then it shouldn't be too hard to do this if you define operators such as dot product on the vector (multiplying each field with the corresponding field in another vector).
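To make the matrix/vector suggestion concrete, a tiny sketch of one dense layer where the forward and backward passes are both just dot products; the sigmoid activation and data layout are illustrative choices:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def dot(a, b):
    """Dot product: multiply each field with the corresponding field."""
    return sum(x * y for x, y in zip(a, b))

def forward(weights, biases, x):
    """One dense layer: each row of `weights` is one neuron."""
    return [sigmoid(dot(row, x) + b) for row, b in zip(weights, biases)]

def backward(weights, x, out, grad_out):
    """Given dLoss/dOut, return (dLoss/dWeights, dLoss/dInputs).
    The chain rule turns into the same dot products, transposed."""
    grad_z = [g * o * (1 - o) for g, o in zip(grad_out, out)]  # sigmoid'
    grad_w = [[g * xi for xi in x] for g in grad_z]            # outer product
    grad_x = [sum(row[j] * g for row, g in zip(weights, grad_z))
              for j in range(len(x))]                          # transposed pass
    return grad_w, grad_x
```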
Ohh, that sounds like a better idea. I tend to OOP everything: what I'd do is try to represent everything as an object, from the nodes all the way up to the layers. Thanks for the reply @@TechWithNikola
Aren’t they known as perceptrons?
I assume you're referring to what I called neurons. Correct, they are known as perceptrons in mathematical models.
This might be a dumb question, but I'm just starting out. Can this be done in Python? I've never used C++, and I'm still trying to learn Python.
Hello, yes, this can be done in Python as well. In fact, Python already has the neat-python library that you can use out of the box.
I would like to produce videos about programming, would it be possible to share some tips, such as how to assemble the graphics, the code part, and the formula animations?
Hey, sure. I use a combination of Manim, powerpoint, keynote for code animations, adobe premiere for editing. I make formulas in powerpoint or Manim depending on the complexity.
@@TechWithNikola thank you very much for the tips
Neat!
Where can one find the associated code?
Hi, source code is currently available for Patreon supporters: www.patreon.com/TechWithNikola
I will probably make the neat library public in a couple of months.
Did this ever get published?
22:37 the one that got -.75
Yeah, I used function smoothing for better rendering, which sometimes adds noise. The more likely result there was 0.
Give it the X,Y location of the food. You are unnecessarily restricting its vision. Think of the player's vision, not the snake's vision. Also give it the locations of its body. Give it ALL the information that the player has access to. You're not evolving a snake: you're evolving the player's ability to play the game.
The input would just be a 4-bit bitmap per cell: food?, empty?, head?, body?, along with an extra 4-bit value to store the current direction (as the bitmap alone will lose this).
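A sketch of this full-board encoding, with invented one-hot codes for cells and directions:

```python
# One-hot 4-bit code per cell (food, empty, head, body), flattened
# row-major, plus a 4-bit one-hot for the head's current direction,
# which the bitmap alone cannot convey.
CELL_BITS = {"food": (1, 0, 0, 0), "empty": (0, 1, 0, 0),
             "head": (0, 0, 1, 0), "body": (0, 0, 0, 1)}
DIR_BITS = {"up": (1, 0, 0, 0), "down": (0, 1, 0, 0),
            "left": (0, 0, 1, 0), "right": (0, 0, 0, 1)}

def board_inputs(board, direction):
    """`board` is a 2D grid of cell names; returns a flat input vector
    of length 4 * rows * cols + 4."""
    bits = [b for row in board for cell in row for b in CELL_BITS[cell]]
    return bits + list(DIR_BITS[direction])
```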