I am glad to have discovered this channel by chance!
A nice example for explaining the local maxima problem. Watching this from the Algerian Sahara, keep uploading videos!
just found this video while learning genetic algorithms for exams, and can't wait for the next video.😊
Which course is teaching genetic algorithms ?
@@shashankmishra9238 I was curious as well
Wow, it's been year since the last video !? Please keep 'em coming! I could swear that you would show the NEAT algorithm in the next video ? Please do it !
Biology meets Computer Science. This is so cool.
I really like how this went from the absolute basics, until the rarely discussed greedy fitness problem
Thank you, I’m glad you liked the explanation!
Speaking of neuroevolution maze solvers, would you test a maze that doesn't "cheat" by making the longest (highest-entropy) path also the correct path (like the example maze in some novelty search papers)? Something like the deceptive Tartarus environment, perhaps? Though I reckon it might be a tough challenge better saved for later (or even bleeding-edge AI research). Having multiple longest paths with only one correct answer might be a simpler approach.
Very interesting thoughts! I'll be looking into simpler approaches first, as they are easier to explain, but I'll certainly want to revisit more advanced ideas in the future!
@@argonautcode you're welcome!
I see, guess the maze with multiple longest paths would be more suitable for that.
That was just wonderful!
Thank you for your high-quality work
this is so good! i love your aesthetic!
Absolutely loved the explanation on this one!
This video helped me understand TPOT in a basic way. Thanks!
this is pretty well made!! thank you
great work friend, thank you and keep it up please
Amazing explanation!! Clear and very useful!! Thanks
wow amazing explanations and animations
Very cool visualizations. I liked it very much!
Really great video!
I'd love if you shared how you made the visuals for this video, particularly what'd you use for the fitness function visualizer and statistics
great video! keep up the good work!
underrated af
I’m curious how to implement this problem with the visualisations.
Seeing is believing, and I’d love to make something like this to begin to comprehend it. Do you have any recommendations or planned tutorials on how we could create this maze problem?
Thanks. Subscribed!
this video is amazing broo
amazing video, congrats!
Can you open-source the code for us to experiment with, including the code for the visualizations? And how did you make such amazing animations?
Really interesting, looking forward to jumping into the code. It would be nice to have it in Python instead of Java, but everyone has their own preferences! Thanks for the video!
Well explained.
Also, the local maximum problem could be solved by using BFS to compute the distance of every legal maze square to the exit, and using that distance as the fitness function. Right?
Yep, you could definitely use BFS!
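To make the BFS idea concrete, here's a minimal Python sketch: flood-fill distances outward from the exit, then use the negated distance as fitness. The maze encoding (2D list, 0 = open, 1 = wall) and all names are my assumptions, not code from the video.

```python
from collections import deque

def bfs_distances(maze, exit_pos):
    """Map every reachable open square to its true path distance from the exit."""
    rows, cols = len(maze), len(maze[0])
    dist = {exit_pos: 0}
    queue = deque([exit_pos])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and maze[nr][nc] == 0 and (nr, nc) not in dist):
                dist[(nr, nc)] = dist[(r, c)] + 1
                queue.append((nr, nc))
    return dist

def fitness(individual_pos, dist):
    # Lower distance = higher fitness; unreachable squares score worst.
    return -dist.get(individual_pos, float("inf"))
```

Since the distance map only depends on the maze, you compute it once up front and score every individual in O(1), which sidesteps the straight-line-distance local maximum entirely.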
Sorry, where's the next episode?
It's the following video on his channel.
Representation in a maze is harder than you initially think. I'm having trouble trying to do this. Basically, how do you know how many genes/moves an individual should have, since they can't really ever grow in genome/move count?
While it’s a little tricky to grow move count, it’s not so difficult to lower it. For example, if an individual solves the maze with moves to spare, we can just ignore the remaining moves. So the idea is that we need a large enough move count so that the individuals have moves to spare. The exact number you pick for this would depend on your maze size and complexity, but it’s generally better to shoot high. We can then reduce the move count over time, relying on the genetic algorithm to pressure individuals to optimize their moves.
Amazing !
Very cool video!
Good presentation
Was the next video, combining a genetic algorithm with a neural-network brain, ever published?
It would be interesting to run this with two species. One that evolves to blend into the landscape. Another to see vision improve to see the prey.
underrated
I wonder if they apply stuff like this to robotics. I know they have more advanced machine learning and all that, but I feel like if you're trying to make a biped like Tesla and other companies are doing, maybe mimicking biology would be a smart way to do it since that's how we ended up the way we are, through this same process.
I was using a genetic algorithm to tune a PID controller. My problem is that the values of Kp, Ki, and Kd are decimal values. How do I convert these decimal values to binary and perform crossover and mutation?
You don't need to convert; the binary values are just for basic examples of decision-making problems. For decimal values, you can design the mutation strategy yourself, such as adding noise to a good candidate or averaging over all candidates.
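A minimal Python sketch of that real-valued approach: Gaussian-noise mutation plus blend (arithmetic) crossover operating directly on the [Kp, Ki, Kd] list. The `sigma` value, the clamp at zero, and the blend choice are my assumptions, not something from the thread.

```python
import random

def mutate(gains, sigma=0.1):
    """Mutate real-valued genes (Kp, Ki, Kd) by adding Gaussian noise.

    Clamped at zero since negative PID gains are usually unwanted.
    """
    return [max(0.0, g + random.gauss(0.0, sigma)) for g in gains]

def crossover(parent_a, parent_b):
    """Blend crossover: the child is a random convex mix of the parents."""
    alpha = random.random()
    return [alpha * a + (1 - alpha) * b for a, b in zip(parent_a, parent_b)]
```

This keeps the search in the controller's natural parameter space, so there's no precision loss or Hamming-cliff effect from a binary encoding.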
It is so interesting! 🎉
The astonishing thing is that if you try a brute force solution, where your code tries all solutions, it might take millions of years.
But genetic algorithms start random and keep finding better and better solutions MUCH faster.
I would say that, more than inspired, it is an abstraction of the concept of evolution in biology, which can then be generalized into the realm of logic and information theory: start with a set P, a population of autonomous agents; each agent carries information and undergoes a process of recombination of that information. With any such set you can implement evolution. The obvious real-life example is biological evolution; another is ideas, where the process of recombination is dialogue and the equivalent of species is culture.
Thanks!
More please.
The fitness function could use the actual distance through the maze to the exit. You start with the exit and give it a value of 0. Then all subsequent squares attached to it are 1, then 2 and so on.
What the hell? They had one more turn to make to win by generation 3 and just decided not to make it for 14 generations. Completely preposterous.
There are no "bad mutations" from an evolutionary standpoint. There are only common and uncommon ones (bell curve).
What if, instead of a single population whose fitness comes from both proximity and exploration, you split them into 2 subpopulations; explorers and improvers/exploiters. Each only gains fitness from its namesake. However, each generation the culled 50% in both subpopulations are filled children from both subpopulations, so the exploration and exploitation percolate
waiting for next video.
Maybe you should consider multiple species, rather than a single one
And what fitness function is used after the correct path is found?