I'm one of the faithful visitors of your great content and really appreciate your efforts and time.
I'd be grateful if you could address statistical analysis with Python, a cornerstone of data science, if applicable.
[17:08]: it's a binary state, you can keep it as simple as `genome[i] = not genome[i]`
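The flip the comment suggests fits naturally into a mutation step. A minimal sketch (the `mutate` name and the `MUTATION_RATE` value are assumptions, not the video's exact code; `int(...)` keeps the genes numeric rather than boolean):

```python
import random

MUTATION_RATE = 0.01  # assumed per-gene flip probability

def mutate(genome):
    # Flip each binary gene with probability MUTATION_RATE;
    # `not g` toggles 0 <-> 1, int() keeps the genome numeric.
    return [int(not g) if random.random() < MUTATION_RATE else g
            for g in genome]
```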
Great video! I'm working more and more to optimize my work processes, and never actually thought about using this. Thanks!
Use these constant values to get the result with 100% accuracy:
POPULATION_SIZE = 500 # Reducing from 20,000 for efficiency
GENOME_SIZE = 20 # Assuming this is fixed
MUTATION_RATE = 0.02 # Small increase to improve exploration
CROSSOVER_RATE = 0.7 # Encouraging more crossovers
GENERATIONS = 100 # Increasing to allow more evolution
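For reference, here is a sketch of how those constants would plug into a one-max setup (the `fitness` and `random_genome` names are assumptions, not the video's exact code; one-max just counts the 1-bits, so the optimum is GENOME_SIZE):

```python
import random

POPULATION_SIZE = 500
GENOME_SIZE = 20
MUTATION_RATE = 0.02
CROSSOVER_RATE = 0.7
GENERATIONS = 100

def fitness(genome):
    # One-max: count the 1-bits; the optimum is GENOME_SIZE.
    return sum(genome)

def random_genome():
    return [random.randint(0, 1) for _ in range(GENOME_SIZE)]

population = [random_genome() for _ in range(POPULATION_SIZE)]
best = max(population, key=fitness)
```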
10:42 that's a slick generator. ty vid
Again taking over Awesome town! THX
I think the reason the fitness wasn't increasing is in the select_parent() function. While higher-fitness individuals may have a better chance at reproducing, that chance isn't strong enough for them to reproduce reliably.
Thanks for sharing the concept
Great video, thanks!
Can you cover machine learning algorithms like the candidate algorithm and the decision tree algorithm?
It's really helpful. Would it be possible to use an evolutionary algorithm to create a workout plan?
Another interesting video. Thanks a lot :)
I think game theory is interesting too
Thank you
Possibly a dumb question, but what's with the a:0 and b:0 on line 11? It was almost like the IDE added those in or something. If I typed them in, I got invalid syntax; if I just had randint(0,1) instead on line 11, it worked. Thanks.
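For context: those `a:` and `b:` labels are most likely the editor's parameter-name inlay hints for `random.randint(a, b)`, rendered on screen but not part of the source, which is why typing them literally gives a syntax error. The actual code is just:

```python
import random

# random.randint(a, b) returns an integer N with a <= N <= b.
# IDEs like PyCharm and VS Code can display the parameter names
# as inlay hints ("a: 0, b: 1"); those are display-only.
bit = random.randint(0, 1)
```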
I'm not sure about the select_parent() function. You go through the pool of candidates, accumulating their respective fitness, until you hit the first candidate whose accumulated fitness is bigger than some random threshold. How does that guarantee that a candidate with higher fitness is statistically chosen more often than a candidate with lower fitness? Shouldn't there be some kind of sorting? I get the impression the candidate pool is randomly sorted, we're randomly choosing a threshold point, and we're therefore randomly returning whichever candidate happens to be the first to cross the (accumulated!) threshold. WDYT?
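No sorting is needed: the threshold is drawn uniformly from [0, total fitness), so the probability that candidate i is the first to push the running sum past it is exactly fitness_i / total, regardless of the order candidates appear in. That is fitness-proportionate (roulette-wheel) selection. A sketch, assuming non-negative fitness values:

```python
import random

def select_parent(population, fitnesses):
    # Roulette-wheel selection: pick a point uniformly on [0, total)
    # and return the individual whose slice of the cumulative sum
    # contains it. Candidate order is irrelevant; each is chosen
    # with probability fitness_i / total.
    total = sum(fitnesses)
    pick = random.uniform(0, total)
    running = 0.0
    for individual, fit in zip(population, fitnesses):
        running += fit
        if running >= pick:
            return individual
    return population[-1]  # guard against float rounding
```

An easy way to convince yourself: give one candidate three times the fitness of another and count how often each wins over many draws; the ratio approaches 3:1.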
A better implementation would be to first perform elitism, where say 10% of the solutions with the highest fitness are automatically entered into the new population. Then you could select the parents through tournament selection, which compares n solutions and chooses the best one (highest fitness) as a parent. After performing tournament selection to get two parents, you could then proceed to crossover as described in the video. I believe this would achieve what you wanted, with fitter solutions being chosen over weaker candidates.
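A sketch of the elitism-plus-tournament scheme described above (`ELITE_FRACTION`, `TOURNAMENT_SIZE`, and the function names are assumed values for illustration):

```python
import random

ELITE_FRACTION = 0.10   # assumed: top 10% carried over unchanged
TOURNAMENT_SIZE = 3     # assumed: candidates compared per tournament

def next_generation(population, fitness):
    # Elitism: copy the top 10% straight into the new population.
    ranked = sorted(population, key=fitness, reverse=True)
    n_elite = max(1, int(len(population) * ELITE_FRACTION))
    new_pop = ranked[:n_elite]

    def tournament():
        # Compare TOURNAMENT_SIZE random candidates; fittest wins.
        contenders = random.sample(population, TOURNAMENT_SIZE)
        return max(contenders, key=fitness)

    # Fill the rest with one-point crossover of tournament winners.
    while len(new_pop) < len(population):
        p1, p2 = tournament(), tournament()
        cut = random.randint(1, len(p1) - 1)
        new_pop.append(p1[:cut] + p2[cut:])
    return new_pop
```

Because the elite are copied verbatim, the best fitness in the population can never decrease from one generation to the next.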
Do you think using PyGAD could make genetic algorithms easier?
The fitness values in the one-max problem were off.
What if the population isn't in a binary state?
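For non-binary genomes (e.g. real-valued genes), the bit-flip mutation is typically replaced with a small random perturbation such as Gaussian noise. A minimal sketch (the `mutate_real` name and `SIGMA` step size are assumptions):

```python
import random

MUTATION_RATE = 0.02
SIGMA = 0.1  # assumed mutation step size

def mutate_real(genome):
    # For real-valued genes, perturb with Gaussian noise
    # instead of flipping bits.
    return [g + random.gauss(0, SIGMA) if random.random() < MUTATION_RATE
            else g
            for g in genome]
```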
Please share the code.
moar of those!
Wow❤❤❤
Hi
You show interesting code, but you haven't tested it before, and are learning how it performs on-camera. How about spending an hour beforehand figuring out exactly what to show?