Training a Neural Network to operate drones using Genetic Algorithm

Share
Embed
  • Published 20 Aug 2024
  • After my first try with flappy I wanted to see how a genetic algorithm would handle more complex situations.
    Github github.com/joh...
    Music used
    freepd.com/mus...
    freepd.com/mus...
    freepd.com/mus...

COMMENTS • 385

  • @Alayric 3 years ago +302

    Good idea, and I like your smoke!

    • @PezzzasWork 3 years ago +117

      Thanks! I think smoke is where I spent the most time :D

    • @mendelovitch 3 years ago +42

      @@PezzzasWork Why do we get hung up on those small sidequests?

    • @I_SEE_RED 2 years ago +17

      @@mendelovitch it’s an easy way to procrastinate the main problem

    • @anujanshu2917 1 year ago

      How or where do you simulate this, in Unity or some special software?

  • @katzen3314 3 years ago +385

    I love how they seem to move so organically even though it seems like a relatively simple model. I bet there's some really interesting optimisation problems and extra restrictions you could throw at this.

    • @katzen3314 3 years ago +27

      Also thanks for uploading the demo and source code, very fun to play around with!

  • @NetHacker100 3 years ago +511

    I think that the need to center themselves perfectly with the sphere is what makes them not become speed machines. Because when they reach the target they always gotta somehow "dock". And that requires their inertia to be 0 when they reach that point so they have to slow down. If somehow this was changed by making the drones to just need to touch the point at any part and maybe making the orb bigger I would certainly expect that there would be more speedy manoeuvres to just arrive at the target and pass through it. Perhaps even in an elliptical patrolling. Would be certainly interesting to see.

    • @00swinter21 3 years ago +8

      I'm currently working on the same thing but with more inputs;
      I will try yours too;

    • @eliaswenner7847 3 years ago +4

      @@00swinter21 Don't forget to post the result on your YouTube channel!

    • @feffy380 3 years ago +19

      Exactly my thoughts. It looks like the target requires pixel-perfect precision to count as a success. A careful approach is the only way when the targeting criteria are so unnecessarily strict.

    • @christopheroldfield1066 3 years ago +6

      ​@@Wock__ I believe you are right. On one of their videos, there is an actual clock face that counts down on top of the target, like a circular loading bar.

    • @UnitSe7en 3 years ago +12

      The goal is to dock, not to touch the target. Changing the goals to achieve a better outcome does not mean that your model improved. Making them just have to touch the target so they could go really fast does not mean that they are suddenly better. Your thinking is flawed.

  • @WwZa7 3 years ago +211

    I'd love to see a game where your enemies are all neural-network-trained AIs, and the higher the difficulty, the more trained the AI variants you have to face

    • @ChunkyWaterisReal 3 years ago +13

      Give it 10 years

    • @kirtil5177 3 years ago +42

      imagine if the AI is being trained while you play. The better you play the less hard the ai is, but if you slow down the difficulty increases

    • @marfitrblx 2 years ago +15

      @@ChunkyWaterisReal it's already possible now lol

    • @ChunkyWaterisReal 2 years ago +1

      @@marfitrblx AI has been shit since the 64 hush yourself.

    • @keyboardegg931 2 years ago +4

      Or even the player being an AI - I can totally see a 2D game with your cursor being the target point, and the more you play/the more enemies you defeat/etc. the smarter your character gets

  • @blmppes9876 3 years ago +162

    5:28, gen 900: Ok, you guys are too good and I'm tired now. Bye!!!

    • @NanoCubeOG 3 years ago

      true

    • @tuna3977 3 years ago +1

      "I have to go now, my planet needs me"

  • @dan_obie 3 years ago +85

    Would be really interesting to add fuel consumption to the mix and watch them optimize their fuel economy

    • @dazcarrr 2 years ago +11

      and give them more fuel for every target they reach as more reward for doing that

  • @markoftheland3115 3 years ago +96

    Very cool stuff, well done!
    Now make them go through an obstacle course 😁

    • @PezzzasWork 3 years ago +53

      I am working on it ;)

    • @marc_frank 3 years ago +6

      a combination of the ants finding the optimal path and then the drones following that? :)

    • @Vofr 2 years ago +4

      @@PezzzasWork where's the video 🗿

  • @phantuananh2163 3 years ago +24

    This channel is a gem

  • @raffimolero64 2 years ago +2

    Love this channel. What separates this guy from others is his consistent ability to make his sims look cool.

  • @osman4172 3 years ago +2

    Great work. I think many people would appreciate seeing the background of the work.

  • @xDeltaF1x 3 years ago +46

    That end result with the live-tracking is so good! I wonder how viable it is to train simple neural networks like this for game enemy AI

    • @originalbillyspeed1 3 years ago +2

      Depends on the game, but on games with a clear goal, it is fairly trivial and will quickly surpass humans.

    • @AB-bp9fi 3 years ago +4

      @@originalbillyspeed1 I guess for different difficulty levels a game designer could use agents (enemies) from different generations, for example easy = generation 400, medium = generation 500, hard = generation 1000.

    • @commenturthegreat2915 2 years ago +6

      ​@@AB-bp9fi I don't think that would work for most applications. When you want to make enemy AI easier or harder, you always have to think of it in relation to the player - for instance, in a stealth game, harder AI could mean it detects you faster - which pushes the player to improve and be more careful. That won't happen if you just made the enemies drunk (which is basically what would happen if you pick bad neural networks) - it just adds randomness which can be annoying to deal with. Maybe it could work better in things like racing games though.

    • @williambarnes5023 2 years ago

      I'm now imagining a game cloud coordinating through the internet. The AI uses background CPU while the game is running to simulate and evolve against itself, spits its best results against the player to see how they fare, and takes those results as more data to go back to the cloud with to keep working. The bots will start laughably bad at first, but they'll learn how players act, and make players devise new tactics... You might even get good teammate and wingman AI out of it if you put those AIs on the player's side.

    • @MrStealthWarrior 2 years ago

      @@commenturthegreat2915 What about training the AI to match a certain level of intelligence? Like if the AI detects a player too fast, it fails the test.

  • @youssefelshahawy8080 3 years ago +3

    This is one of the coolest implementations I've seen. Nice job!

  • @reaperbs7105 2 years ago +4

    Props to Gen 300 and 400 for being underdogs and yet surviving for so long

  • @the0neskater 2 years ago +1

    This is one of the coolest projects I've ever seen. Would be awesome to extend to add walls and an environment! Great work.

  • @GG64du02 3 years ago +27

    I wrote my own autopilot for a cargo drone in Space Engineers and I'm still impressed by this work

  • @SongStudios 3 years ago +1

    Dude I love it when they get sooo ruthless! So fun to watch!

  • @s.m8766 1 year ago +2

    Very nice! I'd love to see the same tests, but with added random disturbances like wind gusts from the side, to see how well they can adapt to that!

  • @YellingSilently 2 years ago

    The end-of-play lineup was a cute touch. Nice work!

  • @thorbenpultke1350 3 years ago +2

    Impressive stuff! I worked with GAs too for my bachelor thesis, but with a 6-DOF robotic arm acting in 3D. Kinda addictive when you dive deep into ML :)!

  • @noiky6164 2 years ago

    OMG this is so cool, your video actually changed my attitude toward neural networks from hate to love.

  • @Phiwipuss 3 years ago

    5:56 The drone in the bottom-left corner synchronized with the beat in the music. Perfection.

  • @dromedda6810 2 years ago

    gen 400 is like that one kid in your class that can't stand still when waiting in a queue

  • @skoll6007 2 years ago

    1:58 that faint Vader "noooooo" put me on the floor for some reason

  • @motbus3 1 year ago +1

    It would be great to have a remake of this one

    • @PezzzasWork 1 year ago

      I am actually working on a follow up :)

    • @motbus3 1 year ago

      @@PezzzasWork noice! I will certainly watch it

  • @darkfrei2 3 years ago +1

    Very nice! Please make more content like this, with neural networks and drones! :)

  • @Fallout3131 2 years ago

    That drone that got yeeted at 5:30 had me dying 😂

  • @manuelpena3988 3 years ago +6

    xDDD the "ok..." almost killed me

    • @Zygorg 2 years ago

      The memes are fun on this vid

  • @Reverend-dd2lq 2 years ago +1

    Getting some strong Factorio vibes at 4:57

  • @jeremybertoncini6935 1 year ago +3

    Hello,
    very interesting work!
    Did you think about testing scenarios with obstacles?
    It would also be interesting to compare the final trajectories and controls with optimal control algorithms' solutions.
    Cheers.

  • @DeepRafterGaming 3 years ago +6

    I suggest adding more than just time to the fitness equation, e.g. energy use, precision, stability of flight, and external forces like wind. With these factors the movement would become smooth as silk. Nice project anyway!

    • @PezzzasWork 3 years ago +4

      The current fitness evaluation takes speed, precision and stability into account. I tried to add wind after the training was done and it worked quite well :)

    • @DeepRafterGaming 3 years ago +2

      @@PezzzasWork Ahh I see, but the angled engines while hovering still seem very inefficient to me :)

    • @PezzzasWork 3 years ago +4

      @@DeepRafterGaming Yes, you're right, and I don't really know why they do this. My assumption is that it is a way to reduce power: as if they couldn't go very close to 0 power, so it is easier to add angle. This could be avoided by taking energy into account in the fitness function. If I increase gravity, they don't angle the thrusters to gain more power. Here is a Windows demo with a config file if you want to try it out: github.com/johnBuffer/AutoDrone/releases/tag/v1

    • @DeepRafterGaming 3 years ago

      @@PezzzasWork Yeah, it's hard to tell why. The fitness function is the most complicated part of any neural network.
      I would always advocate for implementing energy use in any neural network because, if you think about it, if the network doesn't have to bother with the energy it uses, it will always come up with unnecessary movement patterns that look janky. It's more important than speed, I'd say ^^

    • @jetison333 3 years ago +3

      @@PezzzasWork If you watch the way generation 5500 flies sideways, it ends up with one thruster almost horizontal and the other almost vertical. They might like tilting the thrusters because it's kind of an in-between state between flying right and left. So when it gets a new target, it can start flying towards the target sooner. That might be part of the reason, anyway.
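
The fitness terms mentioned in this thread (speed, precision, stability, plus the suggested energy penalty) can be sketched as one scoring function. This is a hypothetical illustration; the weights, argument names, and exact form are assumptions, not the video's actual code:

```python
def fitness(targets_reached, avg_target_distance, avg_tilt, energy_used,
            w_speed=10.0, w_precision=1.0, w_stability=0.5, w_energy=0.1):
    """Illustrative GA fitness: reward reaching targets quickly and precisely,
    penalise wobble and (optionally) thruster energy use."""
    return (w_speed * targets_reached            # more targets per episode = faster flight
            - w_precision * avg_target_distance  # hover close to the target
            - w_stability * avg_tilt             # keep the body level
            - w_energy * energy_used)            # discourage wasteful thrust
```

Tuning the weights changes which behaviours the GA favours; for instance, raising w_energy should suppress the angled-thruster hover debated in this thread.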

  • @Lengthy_Lemon 2 years ago

    You are amazing. Thank you for sharing your fascinating work.

  • @JavierAlbinarrate 2 years ago

    Beginning of the video: LOL!! those squeaks as they fall are really funny
    End of the video: let's run to buy some food cans before they come for me!!!

  • @aiksi5605 2 years ago

    This video felt like it was 30 minutes long because I somehow kept falling asleep every ten seconds or so.
    And it's not boring, and no, I am not high; idk, I guess I just got tired or something

  • @908animates 2 years ago

    Imagine spending hours and hours trying to get to something and then when you finally get there you just have to go to another one

  • @jayshukla6724 2 years ago +13

    7:24 Loved how Gen 400's legs synced with the music...
    Btw, how do we decide the size of the hidden layers? Is there some rule or formula for the best size approximation?
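
There is no exact formula for hidden-layer sizes; they are hyperparameters, usually picked small and tuned by trial. A minimal forward-pass sketch, where the (7, 16, 16, 4) layout is purely an assumed example, not the video's architecture:

```python
import numpy as np

def mlp_forward(x, layer_sizes=(7, 16, 16, 4), seed=0):
    """Forward pass through a small two-hidden-layer MLP. The hidden sizes
    (16, 16) and the 7-in/4-out interface are illustrative guesses; hidden
    sizes are normally chosen by trial and error, not by formula."""
    rng = np.random.default_rng(seed)
    for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:]):
        W = rng.standard_normal((n_out, n_in)) * np.sqrt(2.0 / n_in)  # He-style init
        x = np.tanh(W @ x)  # bounded activations suit thruster commands
    return x
```

Bigger hidden layers add capacity but slow down evolution, since each extra weight is another gene the GA has to search over.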

  • @dromeosaur1031 3 years ago +1

    Thanks for the video! It's really inspiring.

  • @raphulali8937 3 years ago +3

    I have no idea how you did it, but it seems like something fun to learn

    • @PezzzasWork 3 years ago +1

      Machine learning is extremely fun and addictive :)

    • @00swinter21 3 years ago

      @@PezzzasWork can confirm

  • @ferociousfeind8538 3 years ago +3

    You could turn the target tracking into a game, try to get the drone to lose control as quickly as possible, using your mouse as the target! Or, just play with it. It looks fun.

    • @DogsRNice 2 years ago +4

      Give the target to another network that tries to learn how to get the drones to crash while the drones learn how not to crash

    • @angelo.strand 1 year ago +1

      @@DogsRNice oh no the ai wars

  • @KiemPlant 2 years ago +1

    Other than giving us almost 20 seconds to read 6 words at 4:39 this was very enjoyable to watch :p

  • @ThePizzaGoblin 2 years ago

    I like how it learned to turn off its thrusters to arrest upward motion and to speed up descent.

  • @bobingstern4448 3 years ago +2

    I'm more impressed by the smoke, great project though!

  • @flight_risk 2 years ago

    Somewhat smaller models and policy gradient following might have increased convergence speed. MLPs are differentiable, so you could just backpropagate through them, sampling distance to the target at every frame and accumulating rewards over the trajectory for an unbiased estimate of a policy's optimality. You could even use a decay term to incentivize the robots to move faster by downweighting rewards acquired later in the trajectory: distance to the target is ideally the same in the end, but according to the gradient of this reward function, faster would be better.
    The only thing left would be running the simulations in parallel or faster than real time by simply not fully rendering the state of the environment at every training step
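
The decay term described in this comment is essentially a discounted return. A minimal sketch, where the per-frame reward (negative distance to the target) and the discount factor are assumptions for illustration:

```python
def discounted_return(frame_distances, gamma=0.99):
    """Accumulate per-frame rewards (negative distance to the target),
    downweighting rewards collected later in the trajectory."""
    return sum((gamma ** t) * (-d) for t, d in enumerate(frame_distances))

# Closing the distance sooner yields a higher return, so a gradient on
# this reward favours faster approaches:
fast = discounted_return([3.0, 1.0, 0.0, 0.0])
slow = discounted_return([3.0, 2.0, 1.0, 0.0])
```

With gamma = 1 every frame counts equally; gamma < 1 makes reward collected early worth more, so faster trajectories score higher even when the final distance is identical.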

  • @alessandrodamato5059 2 years ago

    Give a consolation prize to generation 300!
    It deserves it all.
    Have you ever tried using a neural network on a hardware platform?

  • @EsbenEugen 2 years ago

    The target tracking would be cool for a background

  • @artherius535 2 years ago

    400 was such a trooper

  • @quinn840 2 years ago

    Pls make more vids like this I love them

  • @thesteveremix 2 years ago +1

    I want to see how chaotic it would be if the drones had collision

    • @PezzzasWork 2 years ago

      I will try this, that’s a good idea ;)

  • @kovacsattila8993 3 years ago +1

    I tried the mouse-controlled version you uploaded on GitHub, and I saw that it's easy to confuse the AI into losing control and falling off the map. I think if you create a small trainer AI for the target control, whose best interest is to confuse the drone and make it fall off the map, it can train the drone not to fall off no matter how the target moves.

    • @PezzzasWork 3 years ago +3

      Yes I did a more robust version that I can upload as well

  • @spoo77jj78 2 years ago

    "Im a Hovercraft like my Father before me and his before him!"

  • @argmentum22 3 years ago

    Adding a fuel allowance would probably give a more varied result, possibly getting those hard-burn drones quicker. Also maybe increase your destination bubble a fraction? That would increase the prize rate, and hopefully the drones would tighten up the homecoming naturally, like the ants do for food routes

  • @darkfrei2 3 years ago +5

    Which parameters give the drone positive or negative feedback?
    Is flying time a positive or a negative parameter? An acceleration to the target?

  • @gummygrimoire5251 2 years ago

    "300! 400! you're embarrassing everyone!"

  • @xandon24 3 years ago

    7:25 the music moves to your left and right ear as the drone in the top right moves its power to its left and right thruster.

  • @veggiet2009 3 years ago

    oooh idea. Space Invaders: Drone Edition. Different levels use different generations of drones as enemies.

  • @jbeltz5347 3 years ago

    Michael Reeves breaking out in a cold sweat in the corner

  • @keltskiy 2 years ago

    This would be a great premise for a game where the character tracks the mouse, so instead of controlling the character you're directing it, and it gets better as you play through AI learning

  • @cainanlove8432 2 years ago

    And you did it with two hidden layers, nice! Also, you have to give it a gun now, I mean come on. Let's see the level 5500 drones beat a human being.

  • @SomeAutomaton 2 years ago +3

    Ok, now make these drones fight in groups of 5. They can kill other drones in 2 ways: one is to ram into enemy drones (killing both of them instantaneously), the other is shooting them with miniguns (only killing the target if it is hit X times). But every time they die they respawn: smarter, faster, more accurate, etc.

  • @ziggyzoggin 1 year ago

    I'm kind of upset that you didn't publish the thing at the end on itch. It's so satisfying to see the drone follow your mouse and I want to play around with it. Great video!

    • @PezzzasWork 1 year ago +1

      You can download the control demo here github.com/johnBuffer/AutoDrone/releases/tag/v1

    • @ziggyzoggin 1 year ago +1

      @@PezzzasWork thank you! :)

  • @laurv8370 1 year ago

    so, it was you who programmed my dog to run after the laser pointer.... 🤔

  • @Andrecio64 2 years ago +1

    1:05: this one looks like the drones from Battle: Los Angeles

  • @JuanPabloLorenzo. 3 years ago +2

    Great video! How long have you been training them? Greetings from Uruguay!

  • @angelodeus8423 3 years ago

    it's cool to see you're using dropout, so it learns better

  • @crristox 3 years ago +1

    What about creating new variables? Like saving fuel or energy consumption, or giving priorities like speed over energy/fuel consumption

  • @chinmayghule8272 2 years ago

    That was really cool.

  • @bluecrystal_7843 2 years ago

    if you had a body orientation/angle input they would have been able to recover from a spin-out or even fly upside down

  • @Duros360 2 years ago

    It’s like they’re scared to collect it, knowing that once they do they’re likely to die xD

  • @Success_Unlimited_ 2 years ago +1

    Nice work! Can you suggest some material so that I can understand in practice how to build a neural network? Something with examples.

    • @PezzzasWork 2 years ago

      That's a good tutorial idea, I will think about it :)

  • @cathsaigh2197 1 year ago

    Gen 2600 was a big leap in speed and control.

  • @Karol52752 2 years ago +1

    Now you can make a game with mouse-controlled drones

  • @tylerbunnell8714 1 year ago

    I would love to see what happens if you give them a finite amount of fuel to manage. Have the fuel decrease quickly/slowly depending on how hard they burn thrusters. Extra bonus for fuel remaining when the task is complete. Death if you run out of fuel.

    • @00swinter21 8 months ago

      I think just counting how much fuel is used based on thruster power and then rewarding low numbers is better than limiting fuel

  • @baconofburger8784 3 years ago +4

    why not add a fuel limitation (which would refill once they get to a point), forcing them to switch between points as quickly as possible from the beginning

    • @chfr 2 years ago

      wouldn't be necessary, they're already rewarded for speed

  • @SoulZeroTwo 2 years ago

    After a few tweaks, I have a feeling this could have real-world use.

  • @Agustin-io4pw 1 year ago

    Imagine one of these things chasing you irl.

  • @markvarden3802 3 years ago

    I would love for you to make an ecosystem like The Bibites using those drones

  • @memento9979 3 years ago

    I like these projects !

  • @jakobheiter355 2 years ago

    You should make a game out of this, it looks very fun!!

  • @cirogarcia8958 3 years ago +2

    I love this! I'm gonna implement it right now in Python. What genetic algorithm were you using? I'm planning on using NEAT

    • @CE-ov7of 3 years ago +1

      how did you get this environment in Python? I want to test policy gradient RL algorithms

    • @j_owatson 1 year ago +1

      @@CE-ov7of Not sure if you still need this question answered, but I'll give it my shot. My guess is he's implementing the basic algorithm of the environment in Python using pygame and numpy. Then for the AI, my second guess is he'd be using the NEAT Python library or a custom AI/NN algorithm for the agent and training. That's my guess, but if you have any questions just reply and I'll do my best to help. Python isn't my strongest language, but I'll try my best.

    • @CE-ov7of 1 year ago +1

      Hey @@j_owatson, unfortunately this is not something I have time/interest for anymore.
      But I really appreciate your willingness to help! This is what makes the software/tech community great!
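
The implementation guesses in this thread boil down to an evolutionary loop. A minimal elitist, mutation-only GA in numpy, as a sketch of the general technique rather than the video's actual code:

```python
import numpy as np

def evolve(fitness_fn, genome_len, pop_size=50, generations=40,
           n_elite=5, sigma=0.1, seed=0):
    """Minimal elitist genetic algorithm: keep the best genomes each
    generation and refill the population with mutated copies of them."""
    rng = np.random.default_rng(seed)
    pop = rng.standard_normal((pop_size, genome_len))
    for _ in range(generations):
        scores = np.array([fitness_fn(g) for g in pop])
        elites = pop[np.argsort(scores)[::-1][:n_elite]]   # select the best
        parents = elites[rng.integers(0, n_elite, pop_size - n_elite)]
        children = parents + sigma * rng.standard_normal(parents.shape)  # mutate
        pop = np.vstack([elites, children])
    scores = np.array([fitness_fn(g) for g in pop])
    return pop[int(np.argmax(scores))]

# Toy check: evolve genomes toward the zero vector.
best = evolve(lambda g: -float(np.sum(g * g)), genome_len=8)
```

NEAT, mentioned above, additionally evolves the network topology; this sketch only evolves a fixed-length weight genome, which is enough for a fixed-architecture drone controller.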

  • @petersmythe6462 3 years ago

    Would be interesting to have a drone sumo where they can collide and try to shove each other out of a ring.

  • @thetafritz9868 2 years ago

    the target-tracking drone would be a really cool and distracting extension; it follows your cursor around wherever you put it lol

  • @damemes3669 3 years ago

    Nobody:
    Generation 200: SPEEN

  • @Vinz_1223 3 years ago

    Now create an additional network which positions the orange dot (target) to navigate around obstacles on its own.

  • @eyalsegal6730 2 years ago +1

    Nice work!
    What mutation/crossover did you use?

  • @mytechpractice8924 2 years ago

    Totally amazing!!!

  • @kahwigulum 3 years ago

    Gen 5500 appears to know how to fall, rather than turning the thrusters to push itself down.

  • @Jacques_JvR 2 years ago +3

    Now make it 3D and have the gen 5500 implemented in there, have them master flight in 3D, then make the difficulties higher. Then after all that, put the best gen into an IRL drone and have it fly around

    • @cyanisnicelol 1 year ago

      But what should be the target

    • @ziggyzoggin 1 year ago

      it's so funny to me how people in comment sections always say "now do [INSERT UNREALISTIC EXPECTATION HERE]", like there's such a difference between simulating drones and making a drone in real life

  • @MarkusBurrer 2 years ago +1

    You should place the targets randomly, not in a specific order. And for more challenge, they only have a specific time to reach the target; after that time the target disappears. And finally, the targets are fuel: if they miss too often, they run out of fuel.
    Edit: maybe even add obstacles.

    • @PezzzasWork 2 years ago

      In the video the targets are in a specific order to be able to benchmark the different generations; for the training I used random sequences

    • @MarkusBurrer 2 years ago +1

      @@PezzzasWork Ok, that makes sense

  • @Srindal4657 4 months ago

    Programmers and scientists are going to have a lot to study in neural networks; maybe that's what new AI will provide. Just more information for humans to expand their minds

  • @linsproul3548 3 years ago

    you should make a game where you control a small ship, like Asteroids, and your goal is to juke the drones and cause them to crash, or see how long you can survive before they hit you or something

  • @Algok17 3 years ago

    Very nice result!

  • @guillearnautamarit9102 2 years ago +1

    Wow, that's amazing and looks amazing. How did you cross the two neural networks?

  • @thefunnybuddy4138 2 years ago

    1:41 Lmaoo. This looks like Flappy Bird in terms of difficulty.

  • @241lolololol 3 years ago +1

    Man, this is so cool. A bit off topic, but how are you rendering the thruster particles and smoke?

    • @PezzzasWork 3 years ago

      The smoke is just made out of static sprites and the thruster particles are baked into the flame's texture

  • @ExotiC255 1 year ago

    Gen 400 is an all-time favourite haha

  • @abeltoth1878 1 year ago

    Really cool project!!!
    I was wondering, what fitness function did you use?

  • @mawa5702 3 years ago

    Love that video

  • @ravenatorful 2 years ago

    While it was nice for the visual of all the different generations together, I feel like it would have been better to randomize the dot locations so that they have to learn to adapt to a new path every time

  • @ethos8863 1 year ago

    You may have to select more aggressively for speed. They seem a bit slower than what the optimal handmade algorithm could do

  • @hakmedolarinde8183 2 years ago

    Man, it would be sick to model a 3D one

  • @sded7126 3 years ago

    Dude the physics look so polished. This is amazing!

    • @UnitSe7en 3 years ago +1

      Acceleration (gravity, mass and inertia) is probably the simplest physics property to program. Literally just adding or subtracting numbers. He does not require your compliments on the physics.

    • @cobaltxii 3 years ago

      @@UnitSe7en ?

    • @cobaltxii 3 years ago

      @@UnitSe7en shut the fuck up, he’s giving him a compliment