I’ve been following this saga since the beginning and I couldn’t be happier with the conclusion! I can’t believe I haven’t subbed until now, keep up the awesome work! 🎉
I don't know how you made this edit with all the different AI runs and their colour gradients... I applaud the way you represented the program, it's very visual and it lets you see inside the program. Very inspiring. Thank you
It's really impressive how patiently you've trained the DRL agent to such a high level. Please make more videos about how you systematize your training runs.
Congratulations on your determination! What an adventure. The narration and editing are worthy of the greatest. Once again, well done for your perseverance. Don't give up, and the results will come on their own!
I am studying in the fields of deep learning and neural networks, and this was one of the most well-put-together and well-explained videos I have ever seen. Incredible graphs and explanation. Thank you
One area I can see this AI being really useful is looking at where you yourself can be faster. In the final part where you show it head-to-head with your own best run, there are a few places where it suddenly leaps ahead of you by several tenths of a second in a single corner. Analysing exactly what it does there could show you how you can avoid losing those 10ths yourself.
Both the animations and the ideas are so beautiful and well executed, it managed to surprise me even after watching many AI-mastering-games videos. Good job!
It gives me chills... it reminds me of my research project in prépa, where I trained an AI to play Mario Bros using the theory of evolution! Hours of fascination...
Absolutely amazing story telling! I loved the use of color to easily demonstrate progression. One of my favorite aspects about this video is that you actually used your AI on multiple tracks and introduced generalization. I feel like too often people will train an AI to only be good at one specific task while ignoring other scenarios. This is one of the few videos in which you're actually training your AI and having it *learn* how to do something instead of just having it memorize one set of instructions. Memorizing ≠ Learning and you did an *amazing* job having your AI learn. Kudos!
Showing the AI's multiple training runs simultaneously and color coded is one of the most gorgeous and elegant depictions of machine learning I've ever seen. Thank you for this video!!
Yeah, about that. @yosh how did you do that? Just curious. Is it part of the game?
@@kossboss I think the game comes with a tool, or there are popular plugins for it (it has a huge player base), to show a lot of 'ghost' laps; there are lots and lots of videos out there showing that. For the coloring, I guess it's what the other reply said: the game lets you make your own liveries, and some people mod the models to look like something else, so it's probably a combination of both.
@@kossboss changing the car skin based on how far it gets into the track.
@@kossboss at 20:31 he thanks "Alyssia" for helping automate the car skin changes, so I guess there's some sort of script involved
Insane production value, the editing that went into this, and the time it must have taken.
AI still can't beat that! (hopefully)
it took 3 years
@@principeSZN But the thinking, planning, learning up to that point played a huge role in shaping how he figured these procedures out and how to best interest and reach the audience. This is the culmination of all his hard work. His legacy began 3 years ago, and he's still going!
It's not like he's doing these videos for free, genius. You don't need to appreciate him for something he's getting paid for by viewers like you.
@@SorakaOTP462 L take bozo
That drift-overtraining and then training off the excess is a really interesting approach to let the AI discover new mechanics 👍🏻
this is what coaches do when they train a new skill for humans, it's interesting to see it applied here, and then removed
Similar pedagogical approach to human learning in general
@@SumGuyLovesVideos what do you mean that's what coaches do? I'm not following
@@teakfreeman3543 take basketball for example, the coach doesn't just have the players play the game and hope they discover all the different skills, he sets up drills
@@teakfreeman3543 each drill focuses on developing a certain skill, like thousands of jump shots, then later in the game it's another option the player can use
This was amazing, like watching a child start to walk then slowly just full sprint. I’m subbed from this video alone
This is nuts. The sheer amount of work involved boggles the mind. Such a well paced and chill presentation too, love it!
Really like the video!
just like a mouse experiment :)
I always love these training AI videos. Its like the "wizard knows his apprentice will be more powerful than him one day, but only if he is guided there" vibe
Like a shonen training montage, but the protagonist is the mentor. It's cute in some way.
I also realized that I find this type of video extremely satisfying, and I think you perfectly described the vibe of it
The way you taught it to drift, then taught it to use drifting to go faster was brilliant
The AI got a whole _training_ arc if you follow my drift.
@@JMurph2015 The topic is drifting from training to puns so fast that I can't catch up
Give me a brake from these puns!
Can we steer this conversation back to its normal path?
These puns simply TIRE me
I have my intro-to-AI exam tomorrow morning, and I just watched you explain all the steps of reinforcement learning and illustrate them by playing TM. It's perfect. Thank you, I now understand and can visualize this learning technique much better, and it really makes you appreciate the beauty of this method.
What a magnificent video and what an incredible journey. This got me excited all over again. You sir took quality to a whole new level, you & the AI should both be very proud :)
incredible journey
Thanks Racetas!! I remember you played one of the maps (level 2) last year, I hope you will try again, I just posted all maps on tmx ;)
oh yeah I remember, definitely viewed it as beatable but painful to a degree hehe, although now clearly... not so easy. The last one still seems very beatable as mentioned, but as for the first two... hats off, I don't see anyone beating it anytime soon @@yoshtm
@@yoshtm Are you familiar with the "micromouse" competitions, where they train a mini computerised car to zip through a maze? Could the algorithms there benefit this project?
I absolutely love the part where it becomes the drift king for optimal rewards. Just styling by doing micro drifts constantly.
Car was presented with a carrot and it went "bet"
I honestly laughed out loud when it did it
Suddenly started playing Mario kart xD
I would unironically watch hours of you facing off against this AI on different maps to see what it does, please make more!
Same, this production value is insane too. I would love to see more AI in video games, it's really fun to watch.
Yeah, maybe with some tracks that change elevation and tilt.
Please stop using the word 'unironically'.
@@ZeddisDead How about please stop policing other people's words :P
@@ZeddisDead why would someone need to stop using a word that is used by many?
This is fascinating... I loved how you taught the AI to drift and then once it learned how it kept doing it without the rewards applied to the action. It shows that it was still focused on the end goal but using skills it was taught when they would help it achieve a faster time. Really amazing to watch.
Watching your own creation grow and outperform you has such a paternal feeling, it's amazing.
Father of the AI uprising 😂
i know right well your dad wouldnt feel that tho
...future generations of humans, if any, are fucked lol
@@BigBoycashh Sounds like projection
This project is still (I've followed it for a long time) one of the most interesting projects on YouTube. It is indeed fascinating what a smart guy with a laptop can do at home… I'm an engineer and passionate about gaming, and I couldn't even begin to think of how to accomplish such a thing. Congrats to the author! 👏🏻
proof
@@dman0odman267?
code bullet is another great youtuber who has done very similar videos
Just follow an AI curriculum.
It's easy.
Looking forward to seeing the AI take a shot at Deep Fear one day
It's crazy, with enough practice I really think it has a good shot at completing it. I mean with like a tonnn of practice
@@mifluffy5196 The trick would be to run the simulations faster than real time on a server farm. I know it can be slowed (Riolu!) so I'm sure it can be sped up as well? It would be amazing to see where this could go.
@@inthefade The true challenge would be the pathfinding. An untrained AI would take billions (if not many orders of magnitude more than billions) of simulated years to figure out how to drive the map. So you would either have to start the AI with a reinforced path like Mudda's run, or have a godly pathfinding algorithm. Otherwise the AI would just start driving the wrong way.
@@MrMeasaftw There's ways of doing it
@@ChuckSploder My first thought was to structure the layers in a way to take location data. This way the network will respond differently depending on where it is in the map.
Seeing this project come to fruition after three years is incredible to me. I watched from the very beginning and thought it was amazing how you could teach an AI to beat the human, and now actually getting to see it put into play is amazing. Love your work so much, and I guarantee that it will only improve from here! I have tried to beat the AI myself and it really has progressed so much. I can't wait to see pro players actually giving this AI a real challenge after such a long dream. ❤
Your meticulous dedication pays so many dividends. Keep doing what you're doing, it's insanely entertaining and informative. I guarantee you're persuading millions to become programmers, AI experts, etc.
The sheer work and talent this must have taken is insane. Good job, and congratulations!
Fr
This was so much fun to watch! Thank you for putting it together, and posting it for us to enjoy!
100th like, have a great day bro
Well done with this video, dude! So much work must've gone in, and you present it so nicely and calmly and... dunno. I haven't played TM for a decade, but what a fun watch!
Now we want a race between programmers to see who makes the fastest AI!
You mean a race between AIs to see which one makes the fastest AI? 😂
there's several different robotics competitions that do this :D
@@zalmarzalmar3835 I think he meant a general purpose AI which makes programming AIs that produce advanced driving AIs :)
It would be cool on real tracks with real cars.
Should we put people in them?
i'm glad you're back man
this is genuinely so impressive
Cant even imagine how painful and time consuming it must have been to create this amazing video. Crazy good stuff!
And imagine how long it must've taken to create the AI
This is easily one of my favourite videos on all of YouTube. The topic, the storytelling, the editing, everything is awesome. Well done!
Shoutout to the one iteration of the AI that managed to fuck up so spectacularly it went over the railings at 2:33
agreed
Fr
Reminds me of the AI that was learning to play tag, taught itself how to clip through the map and stayed out of bounds so it couldn't be touched
I relate to this AI
@@1stTitanProductions Just like me fr fr as a kid. Designated spot to play tag? Nah, Imma hide out of the area.
this is the kind of channel that really needs to be rewarded for the amount of hard work put in
It’s called views
how the hell is this channel below 100k subs with this level of dedication, nice editing, and beautiful data display? This feels like 1M-subs kind of level
mainly because it's a game that has had a total of about 10 million players over the last 20 years, and not a major title like CSGO etc. And about 36% of those have seen this video! @@azultarmizi
Three years of struggle
AI edited it
As an AI engineer who played lots of Trackmania growing up, I thoroughly enjoyed this video. Great editing too, and great to follow your journey and thought process!
The video production is just the best. I appreciate the amount of work you have put in over the years.
16:19 I love how the AI found a way to abuse the crap out of the reward system 😂
it got too smart, we gotta disable the brakes again
it was to be expected. When you ask an algorithm to find the best way, it always will, within its capabilities and limitations. No limitations = problems^^
It's how humans approach problems too. Like in teaching jobs that incentivize pass rate over anything else, teachers will prioritize passing students over making sure they actually understand the material.
Same with support centers. If you reward agents on the number of tickets closed, you will get tons of closed tickets, but the quality will suffer. @@benknoodling3683
so true XD
Really interesting approach to have the AI train with a "wrong" reward for a while to overcome a local maximum that's otherwise hard to escape. That feels like it has some really good parallels to human learning, where a good teacher can help you immensely in how quickly you learn something new. Or how athletes sometimes train using special limitations or disadvantages to improve their ability in specific aspects of their sport, all to be better when they go into a normal competition. Awesome video.
Same principle in life. You can learn from mistakes and failures and take away positives from them. A full technique may not work but partial techniques can be applied in certain situations.
It's the foundation of Bruce Lee's martial arts. Learn as much as you can, use what works best. There aren't really any downsides to iteration.
There are many cool approaches to fixing local-maximum problems, usually inspired by real-life processes. For example, one method is inspired by how heated metal cools down and how the excitation of its particles settles: the algorithm has a large chance of picking a random option instead of the trained one, and that chance decreases logarithmically as time goes on, so it doesn't get stuck at first but is expected to settle over time.
it's precisely the definition of training
Simulated annealing, yes, but that is a very basic algorithm. @@xXErr4rXx
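As a rough illustration of that decaying-exploration idea (a minimal epsilon-greedy sketch with an annealed exploration rate; the numbers and function names are arbitrary, not anything from the video):

```python
import math
import random

def epsilon(step, eps_start=1.0, eps_end=0.05, decay=5000):
    # Exploration probability: starts near 1 (almost always random) and
    # settles toward eps_end as training progresses, annealing-style.
    return eps_end + (eps_start - eps_end) * math.exp(-step / decay)

def select_action(q_values, step):
    # With probability epsilon pick a random action (explore),
    # otherwise pick the currently highest-rated action (exploit).
    if random.random() < epsilon(step):
        return random.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])
```

Early on almost every action is random; late in training the agent mostly follows what it has learned, which is the "settle as time goes on" behaviour described above.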
Yeah, it's basically what we do when we practise a specific skill that doesn't grant satisfying results on its own, but will help improve the bigger work.
The AI journey feels like a lesson in consistency. It was kicking your ass even without the brakes, and that was purely through being so thoroughly consistent in the corners. The final version with the neo-drifting was even drifting much cleaner than you were, too. It was a joy to watch.
This is absolutely fascinating. Well done, sir. We thank you for your 3 years of service!
definitely subscribing. I've seen a few of your other videos, but I figured training the AI was going to be a one-off thing, so the fact you plan on teaching it more (hopefully a lot more) is super exciting. I'll be counting the days :)
The fact that on the endurance, the AI clipped the back wheels to turn sharper is very cool.
Color shading the cars and the training progress bar is a really nice edit! It's not always exciting to see an AI drive badly, but watching the yellow to green is pretty great anticipation.
The editing and the video is very nice man thanks
The editing is amazing and I LOVE the animation renders to show the ai's many attempts
To make it more robust, it could be interesting to add a variety of extra conditions:
- you are already spawning in random spots but you can also spawn in random states, i.e. in different orientations and speeds, possibly including upside down so it has to learn to turtle and recover (where that is possible)
- in addition to the lateral motion reward, you can try somewhat randomized state rewards (reach weird parts of the state space)
- or even exploration rewards (you can basically do a coarse histogram of all possible internal states and then reward it for even coverage of that histogram. Rather than as fast as possible, it should be driving in a way that finds as many states as possible while still finishing the map.)
- or action constraints (disable braking from time to time. Disable *forward* from time to time so it has to learn to deal with backward driving. Maybe occasionally even disable left or right. It's also possible to do "sticky actions" where you just randomly make it commit to an action for a few frames rather than being able to change the action every frame; see the sticky-actions sketch below)
- or senses (disable some of its inputs either by zeroing them out or by sending random noise through them)
- or road conditions (you already mentioned those so I'm guessing your next video is gonna tackle that)
- or physics (you also mentioned this as well)
People have also experimented with a very weird robustness strategy where you basically add spurious inputs (they just get noise as input) but then *shuffle around which input corresponds to which value* so the AI has to learn to spot patterns in the inputs to figure out what those inputs likely mean before actually acting upon them.
All of these together, or even just a solid subset, should make for a really robust and multitalented AI that can theoretically achieve just about anything that can be achieved in the game. Like, in terms of finding the state. Not necessarily yet in terms of beating world records. And then, once you have that, you just finetune that basic broadly capable AI without any of these constraints on any map you like. It'd basically be what you did with the drifting here but training it towards much broader capabilities as a starting point.
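As a rough sketch of the "sticky actions" idea from the list above (assuming a gym-style environment with reset()/step(); nothing here comes from the video's actual code):

```python
import random

class StickyActionWrapper:
    """Occasionally forces the agent to repeat its previous action for a frame,
    so it has to learn behaviour that is robust to imperfect control."""

    def __init__(self, env, stick_prob=0.25):
        self.env = env          # any environment exposing reset() and step(action)
        self.stick_prob = stick_prob
        self.last_action = None

    def reset(self):
        self.last_action = None
        return self.env.reset()

    def step(self, action):
        # With some probability, ignore the requested action and repeat the old one.
        if self.last_action is not None and random.random() < self.stick_prob:
            action = self.last_action
        self.last_action = action
        return self.env.step(action)
```

The other constraints (disabling braking or forward for a while, zeroing out senses) could be layered on in the same way, each as its own wrapper.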
good idea
Can you explain the spurious inputs strategy further please?
@@owendeheer5893 So I don't remember what particular network style they chose for this. I think it was a recurrent neural network? But anyways, the basic idea is pretty simple:
In addition to all the inputs that already are there, you add a bunch more. (Say, three more neurons or whatever)
Those extra inputs simply get fed random noise, so they aren't going to be meaningful to the training what so ever.
But the twist is that you then every so often (after, say, a second) *randomly swap the order* of those inputs so the layer after doesn't know for sure which input is which. It has to figure that out based on the received signals.
That way it learns not only to relate input patterns to output actions, but necessarily also what typical input patterns look like. It has to work way harder and "pay way more attention" if you will, to still get a meaningful result.
IIRC the idea was, that you can use this to make it possible to extend the network after training. Like, instead of noisy inputs, you can then add additional actually meaningful ones, and there is a chance the network manages to generalize over those additional inputs.
To be clear, I don't actually think that particular augmentation would be useful here. Most likely, it *could* be, but it would require a larger network just to allow for the overhead of internally swapping around the data to be routed correctly. IMO the most powerful ones I mentioned are likely to be the road conditions and physics tweaks alongside the histogram over state space (that's somewhat related to Map Elites, although that's an evolutionary algorithm so not quite the same. I think there is a variation of that which is meant to work in this setting though. Differential Map Elites or something?)
Simply increasing diversity (i.e. training over multiple maps at the same time) is also likely to give rise to gains, especially paired with a curriculum so easier maps are experienced earlier and more challenging ones later. (Very challenging maps initially will only cause a lot of noise slowing progress even on easier maps, until the AI acquires some basic skills. Definitely don't train it on Kacky maps right away lol)
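A toy version of that spurious-inputs augmentation might look something like this (purely illustrative; the sizes and shuffle period are invented, and this is not how the video's network is set up):

```python
import numpy as np

class ShuffledNoisyInputs:
    """Appends a few noise-only inputs to the real observation and, every so
    often, re-shuffles the order of all inputs, so the network has to infer
    from the signals themselves which positions actually carry information."""

    def __init__(self, obs_size, n_noise=3, shuffle_every=60, seed=0):
        self.rng = np.random.default_rng(seed)
        self.total = obs_size + n_noise
        self.n_noise = n_noise
        self.shuffle_every = shuffle_every   # e.g. once per second at 60 fps
        self.perm = self.rng.permutation(self.total)
        self.frame = 0

    def __call__(self, obs):
        # Re-draw the input permutation periodically, then apply it to the
        # real observation concatenated with fresh noise inputs.
        if self.frame % self.shuffle_every == 0:
            self.perm = self.rng.permutation(self.total)
        self.frame += 1
        extended = np.concatenate([obs, self.rng.normal(size=self.n_noise)])
        return extended[self.perm]
```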
@@Kram1032 "But the twist is, that you then ever so often (after, say, a second) randomly swap the order of those inputs so the layer after doesn't know for sure which input is which. It has to figure that out based on the received signals.": This sounds like an attempt to force location invariance (it doesn't matter where the signal comes from just the relative strength between the signals as a whole). Which seems only usefull in cases where you want the network to learn statistical things, or you want to force the network to encode information in certain "interesting" ways.
@@someonespotatohmm9513 yeah, as said, I don't think it'd be particularly useful here. I just thought it's a fascinating idea.
it's so fascinating to see people with such skill and passion do something like that
Thank you for that beautiful sentence.
Impressive.
I'd be interested in the feeling you had when you finished this video and uploaded it. After so much time, so many tries, and so much struggle.
Congratulations.
You did awesome!
Absolutely incredible story telling, topic and editing, Yosh!!! 🔥
Yoshhhh
Yoshhhhh
I have been waiting for this video for a year now and I must say I'm more than surprised by how good it is! Incredible editing, voice-over and content!!
I hope this goes viral hard!
labonba
real
Hey thanks Labomba! I've always been very inspired by the Trackmania k-projects of a few years ago, to make these videos. I've watched the one on your channel several times so I'm glad you saw this video ;)
@@yoshtm Hope we finish the 100k this century haha
@@L4Bomb4 Can't wait to see that!!
I feel so sad about that one who fell off at 9:01
This is the third Trackmania documentary I've watched on YouTube. I have never played Trackmania. I do not play racing games at all. Yet these docs are incredible!
I also haven't ever played but have watched like 6 or 7 videos that are over 20 mins long. It's fascinating.
This editing has addiction potential, and your work might change Trackmania forever.
Round. Of. Applause!
I hope Wirtual does a followup. Would also be fun if the devs release an AI medal on each map
Wirtual should talk about how huge the implications are in speedrunning. It could be used as a new cheating method that could potentially be harder to detect than TASes if it gets advanced enough (actually now that I think about it its tells are probs similar if not the same as TASes’ tells)
@@goldenwarrior1186 Antagonist AIs can be trained to estimate the probability that a run is AI-driven. Kinda similar to all the online apps that tell you if a photo is AI-generated. I think those need the training model to work right, though.
Since you reminded him: I'm now sure that, for now, AI couldn't beat speedrunners, because they use so many complex tricks, like a bug jump off the nose, etc. Getting the AI to even learn that this is not a mistake but a feature would be very hard.
@@XCanG About this, if you want the AIs to do anything, you give them some reward, then, save the ones with a higher reward and scrap the others.
@@XCanG true but not many maps can make use of those "features" I don't think
THX FOR ALL THE DETERMINATION OVER 3.. WHOLE.. YEARS to make this video
At the end of the AI training, it always seems like you just did a few logical changes and then it worked. Coming up with these logical changes is the really hard part. Great work!
Seeing the AI plan its most efficient routes was like watching a fluid dynamics simulation; in fact it probably literally is a more accurate representation of such than a lot of CGI engines can perform or "animate".
Amazing description of how complicated, difficult and challenging it is to be an AI engineer. I appreciate that you shared all the ups and (especially) downs of such a journey. Complicated things take years to be made, but in the end it pays off.
hooked till the end. Amazing work, explanation and visuals, man
This is the spirit of coding, and CS in general. Trying new things that you aren't certain about. Learning by doing. Thank you for putting in over 3 years to make this masterpiece.
BTW can we -steal- *borrow* the training data pls?
You could say he did a bit of reinforcement learning himself
Good catch! @@sahajramachandran348
This is fantastic! The editing, effort, and time put into this are amazing. Congratulations.
The main problem with the final test is that this part of the map was still contained in the training data, which means it likely doesn't generalize and instead was overfit, at least slightly.
Slightly? Bro the whole thing is overfit lmao
It doesn't seem like a problem, just a known constraint on this solution.
Technically speaking: say the AI trains thousands of times on each track in existence. It has the capacity to store track-specific data and total, general experience training data. Would it be overfit or appropriately generalized for its purpose?
Philosophically, is it that much different than a human player with "favorite" tracks, ones that gave the brain the best feelings to repeat and learn?
Pretty sure that wasn't the point of the final test, as proving generalized success was already determined to be a much harder task. The final test appears to exist solely to prove whether the AI truly was driving/performing faster than him, or whether it was just its consistency giving it an edge, given all the human errors a person would make on such a long track.
So the final test was realistically just meant to be the same test as the full map, but with the variable being changed being himself as the player, since a smaller snippet of the map made it feasible to "perfect" his own runs, or at least minimize his mistakes.
That wasn't really viable on the full track, with so many opportunities for mistakes. Which is also likely why he mentioned that there are surely far better players who could beat the AI on the final smaller track snippet, but he doubted they could beat it on the full-size track: on a track that long, any human is bound to slip up somewhere, giving the AI a chance to catch up.
While still possible, it would take a lot more practice and luck.
The task of further generalizing and making the AI successful on any track is one he hasn't truly delved into yet; he recognized it was beyond the scope of this video, and it will likely be tackled in the future.
So the final test really served to demonstrate that the AI's biggest advantage was consistency, and that before being trained to drift it wasn't actually driving faster or "better" at all; it just wasn't making the countless human-error mistakes a player would. Once he transferred to the snippet map and practiced out as many mistakes as he could, the gap closed and he was matching or outpacing it.
Then the drift training closed the gap again by actually driving more effectively, but it still appears it could be outpaced if a player pulled off their best version of a perfect, no-mistake run.
It's like in speedrunning: it's easy to have an incredibly skilled, well-practiced player who plays the game quite well, but runners often need countless attempts, given the numerous frame-perfect or otherwise precise moves required, all of which have to land in the same run. If you have an AI that can flawlessly and consistently pull off those specific frame-perfect moves every time, you could say the AI is faster, but the speedrunner is still likely playing the game as a whole better; the gameplay between those moves is likely faster and better performed, and if those individual move-based mistakes were removed from the equation, the player's overall gameplay would likely be faster than the AI's.
So the final test was just trying to remove that same mistake-based advantage as best he could, in hopes of measuring overall skill rather than mere consistency.
@@jsanchez23 thanks bro gonna use this for my machine learning class essay
@@falinoluiz5962 Not sure if sincere or bait, as I do realize how long-winded it all was and I'm not sure if I really said anything of merit. If you are being sincere, that's cool; if not, I get it.
amazing video, amazing editing
The drifting in particular was very smart, and not something I could've quickly thought of. Please make more, you're a great informative story teller.
I loved how you showed exactly what a real, practical use of machine learning AI looks like and the pitfalls associated with it. The journey was very interesting. Great video.
your editing skills are outta this world man, what a fun experience this has been. Thank you for this video!
I love how you depicted machine learning in this video. A lot of people think AI is this all-powerful tool that, as long as it has data, can do anything. I like how you demonstrated that AI is more like a baby and that you really need to hold its hand every step of the way to make it effective.
Seeing every single attempt made by the AI is so satisfying. And it's all color coded too. I love it. It's also really cool watching it slowly understand exactly what it's doing.
this is one of my favorite deep cut series on youtube. really impressive work
I like the color coding!
Also, I think your idea of giving it a specific reward for a certain skill, then taking that away once it mastered the skill, is really genius! Did you get that from a paper? It definitely belongs in one. Kudos to you!
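In reward-shaping terms, the trick is just a bonus term that gets switched off once the skill is learned. A minimal sketch of the idea, with made-up quantities standing in for whatever the game actually exposes:

```python
def shaped_reward(progress, lateral_speed, drift_bonus=0.0):
    """Base reward is progress along the track; the optional bonus term pays
    for sideways motion (drifting) only while the skill is being taught."""
    return progress + drift_bonus * abs(lateral_speed)

# Phase 1: teach the skill by rewarding drifting on top of normal progress.
r_teaching = shaped_reward(progress=2.4, lateral_speed=1.3, drift_bonus=0.5)

# Phase 2: drop the bonus; the policy keeps drifting only where it still pays off.
r_normal = shaped_reward(progress=2.4, lateral_speed=1.3)
```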
Yeah, I guess that’s like when humans break learning down in to specific skills for a period of time before putting it all together again later.
It’s like positive reinforcement when teaching children or a dog. You reward it for doing the right thing until it does it naturally. Pretty cool
That's why I like YouTube; this video was just amazing, and kudos to you for putting in such effort
Really insightful video ! What a comeback from the AI 😮
Really great work, and guys like you are going to be talked about in 10 years as the pioneers of training AI in various applications. Very inspiring
For the AI to be able to reintroduce its new isolated drift skill to its arsenal after the reward was removed is so cool to me. I feel like that is such a massive way to teach AI new skills. Doesn't that imply that you could isolate a thousand different skills, and then have the AI incorporate it all into something massive?
That is in fact how we learn. What I find interesting is we seem to be mapping out the process of learning in general, by having to create it from scratch with AI. Videos like this have taught me how to teach myself new skills, though creating a reward for random things is still not easy.
I really find things like this impressive, and it's very interesting how AI works when it's learning 😊
These are the ultimate YouTube videos: secretly educating you in some way, disguised as a very entertaining video.
I loved everything about this. The story telling, the pace, the visualization, the editing, AI. Absolutely brilliant video. Keep up the good work
Excellent video! It's great to see someone passionate who also puts a lot of effort into the presentation ❤
I have been waiting 3 years for your update on jumps and generalized maps, I will be there in another 5 years.
Congratulations on your great work; this is something I also tried to do in 2017, without any prior ML knowledge. I failed and abandoned the project. It's very hard to accept failures the way you did over these 3 years, but the rewards are awesome!
this is probably one of the top 5 videos I've ever seen. I've never been so interested in something while watching. GGs dude
Very good video quality, very well explained,
calm voice, insane skill, legendary editing, and high intelligence from this channel.
One of the highest-quality videos on UA-cam; it really feels like you put A LOT of effort into this
As a non-trackmania player who’s been keeping an eye on this series over the past few years, it’s amazing to see the AI drive so well! Amazing work!
same, yeah, Trackmania seems cool but I've never played it much. I was super happy to see the AI was able to beat them after all these years though!
I can't wait to see generalization. I hope we can see a full AI playthrough of TMNF at some point if this keeps going.
Yes. That would be a beyond-a-million-views video.
Just hope it doesn’t generalise too much or it will push us humans off the planet
There's a streamer on Twitch that already does this, under the name PedroAITM
I don't know how well this approach works in terms of generalization. I have serious doubts about it working on any maps designed differently from the twisty, horizontal characteristics of his current test maps. And if he manages to get it working on different maps, it will become much slower on the current ones.
@@timbraska6750 I think for the AI to drive well on a variety of maps there needs to be some solid foundation to start off with, then some training on individual maps. This is also fair to human play since we play maps over and over to learn them specifically, while having a lot of experience in general to back that up.
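(A rough sketch of that two-phase idea: a general driving "foundation" built across many maps, followed by extra practice on specific ones. The agent, the train_on_map helper, and the episode counts are hypothetical placeholders, not anything from the video.)

```python
# Hypothetical two-phase training schedule: generalize first, then specialize.
def train_generalist(agent, train_on_map, practice_maps, target_map):
    # Phase 1: foundation -- rotate through many maps so the agent learns
    # driving skills instead of memorizing a single layout
    for _ in range(100):
        for track in practice_maps:
            train_on_map(agent, track, episodes=10)
    # Phase 2: specialization -- extra practice on the map that matters,
    # much like a human grinding one track on top of general experience
    train_on_map(agent, target_map, episodes=500)
```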
This video must be seen by all, just so people can get an idea of how AI works through trial and error, and how much we can be at a disadvantage against AI. This video is insane. One of my faves of 2023.
Hmm, did you see an early release last year? :-)
oh it's cool now, just wait till the cute Trackmania AI grows up and is chasing you around those corners IRL
Congratulations on all the work you've accomplished. I'm convinced that if you keep going down this path, you'll end up with an AI that performs well on any type of map. It might even end up attracting Nadeo's attention ;)
A Frenchman on an American video? Surprising!
Who is Nadeo?
that's because yosh is French
Ah, I didn't know @@lzjzj3jke90
@@Ciboullete judging by the accent, he's one of ours
One of the best UA-cam videos I’ve ever watched. The time and effort this took is commendable to say the least
i don't know how I ran into this, but what an amazing way to showcase AI, how it works, and a very passionate player going the 'extra mile' to make the things they love even better. True mastery of a subject. Well done!
From my knowledge, the input to the NN isn't nodes like "Distance to centerline" or "Next Curve Distance"; it should be the pixels on the screen. The AI then takes this input and extracts the features in a latent layer (if they are relevant), and the output of the AI is the input to the car controller (gas, brakes, steer left or steer right). The idea behind it is that some of those features might not be accessible to the agent (if you want to make a fair agent that plays "as" a human would), and some of the features might not even be relevant to the agent.
The actual input in this kind of reinforcement learning is the pixels on the screen; from those, the AI "learns" what works best in each situation and then generalizes to "situations that look like this". So if the AI sees pixels that look like a curve is ahead, it will learn to slow down, provided that feature (curve ahead) is relevant to its performance.
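(To illustrate that "pixels in, controls out" setup, here is a minimal PyTorch sketch; the layer sizes and the four-action controller are assumptions for illustration, not the architecture from the video.)

```python
import torch
import torch.nn as nn

class PixelPolicy(nn.Module):
    """Screen pixels -> convolutional features -> latent layer -> car controls."""
    def __init__(self, n_actions=4):                   # gas, brake, steer left, steer right
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d((6, 6)),               # fixed-size feature map
            nn.Flatten(),
        )
        self.latent = nn.Linear(64 * 6 * 6, 256)        # learned features (curves, walls, ...)
        self.policy_head = nn.Linear(256, n_actions)    # scores for each control

    def forward(self, frame):                           # frame: (batch, 3, H, W) pixels
        return self.policy_head(torch.relu(self.latent(self.features(frame))))
```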
About the AI "loves crashing into walls" you should define a new reward function to maximize, look at say 2 minutes period, make it max distance (on track) from start, and don't even look at "race finish" at all, so for example if a race should take 3 minutes if played perfectly, each training session would be 2 minutes and the best score would be the one that reached the furthest (that's how I'd train agent for a racing game).
About generalization, the "pixels as input" approach would be much better because, just as a NN can be trained to recognize things, it can recognize the map's curvature, its own orientation, and things you wouldn't even think about adding as data, things you wouldn't even consider relevant. Also, a minimap is included, so if the network "decides" during training that the minimap data is relevant, it will pick that data up as a feature in a latent layer. Finally, the pixels-as-input approach can serve as a general AI for any game; the only thing you would change is the reward function to maximize.
9:44 This editing was sick.
10/10
I’ve been following this saga since the beginning and I couldn’t be happier with the conclusion! I can’t believe I haven’t subbed until now, keep up the awesome work! 🎉
I don't know how you made this edit with all the AI's different runs and their color gradient... I applaud the way you represented the program; it's very visual and lets you see inside the program. Very inspiring. Thank you
he's French, isn't he??
@@cdv_974fpsb3 you can tell from the accent
He also has the French flag on his car
The cars might be colored based on how many reward points they achieve on average.
@@cdv_974fpsb3 his English and his accent are very good, but you can still pick out the French touches 😁
It's really impressive how patiently you've trained the DRL agent at the highest level. Please make more videos about how you systematize your training runs.
Congrats on your determination! What an adventure. The narration and editing are worthy of the greats. Once again, well done for your perseverance. Don't give up, and the results will come on their own!
Agreed!
Yeah!
This video was shown in the latest Linus Tech Tips video about the LG Wing. Congrats dude
What an amazing effort, I was most blown away by the presentation and editing! Well done.
Halfway through, I just want to say I am thoroughly enjoying the quality of the video itself. Masterfully made.
I am studying in the fields of deep learning and neural networks, and this was one of the most well-put-together and well-explained videos I have ever seen. Incredible graphs and explanations. Thank you
How is this not just trying all options with mass computation? I don't think there's anything that can be considered AI.
That’s incredible, it’s like watching your student grow and it’s at the point where it finally surpassed you.
Nice project, well done!
One area I can see this AI being really useful is looking at where you yourself can be faster. In the final part where you show it head-to-head with your own best run, there are a few places where it suddenly leaps ahead of you by several tenths of a second in a single corner. Analysing exactly what it does there could show you how you can avoid losing those 10ths yourself.
That is true. This is exactly how AI is being used to advance the top levels of competitive chess and go.
Both the animations and the ideas are so beautiful and well executed; it managed to surprise me even after watching many AI-mastering-games videos. Good job!
This gives me chills... it reminds me of my research project back in prep school, where I had trained an AI to play Mario Bros using the theory of evolution! Fascinating hours...
Absolutely amazing story telling! I loved the use of color to easily demonstrate progression. One of my favorite aspects about this video is that you actually used your AI on multiple tracks and introduced generalization. I feel like too often people will train an AI to only be good at one specific task while ignoring other scenarios. This is one of the few videos in which you're actually training your AI and having it *learn* how to do something instead of just having it memorize one set of instructions. Memorizing ≠ Learning and you did an *amazing* job having your AI learn. Kudos!
Amazing work man, I can't wait to see the next installment
I don't play Trackmania, but the work you presented is monstrueux. Félicitations
Same here =)
as a french beginner, i understood this (except for monstrueux and félicitations) so I'm proud now
A nice French accent on top of that, lol
félicitations = congratulations, and monstrueux is like huge @@cewla3348
@@cewla3348 félicitations = congratulations; monstrueux = monstrous, but here it means incredible
Dude!
I'm so happy to see You back!
I love your content and I can't wait to see the AI trying campaign maps haha
beautiful video! really combining both of my biggest interests: games and doing overly complicated stuff with computers
I literally had my jaw on the floor when I saw it learn the neodrift. What a seriously impressive achievement you've pulled off with this AI - well done