Deep Learning Cars
- Published 22 Oct 2016
- A small 2D simulation in which cars learn to maneuver through a course by themselves, using a neural network and evolutionary algorithms.
Also check out my other project "AI Learns to Park":
• AI Learns to Park - De...
Two AI fight for the same Parking Spot:
• Two AI Fight for the s...
Interested in how Neural Networks work? Have a look at my one-minute-explanation: • Explained In A Minute:...
This simulation was implemented in Unity. You can find detailed information about how this simulation works, as well as a link to the entire source code on my website: arztsamuel.github.io/en/proje...
Don't miss any future videos by subscribing to my channel.
Follow me on Twitter: / samuelarzt
#MachineLearning #Evolution #GeneticAlgorithm - Science & Technology
Check out my new video! AI Learns how to parallel park: ua-cam.com/video/MlFZjLkEIEw/v-deo.html
Hey, I’m trying to learn this kind of machine learning. Which course do you recommend?
Please explain how the fitness value of each car was calculated?
@@TuanAnh-mq6sw Each car's fitness value is equal to the percentage of track completion. Since that can't be calculated by simple distance to the end point, I placed several "checkpoints" throughout the map. It's pretty straightforward from there.
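The checkpoint-based fitness described above can be sketched in a few lines. This is illustrative Python, not the original Unity/C# code; the checkpoint layout and the distance interpolation between checkpoints are assumptions:

```python
import math

def fitness(car_pos, checkpoints, reached):
    """Fitness = fraction of the track completed, via checkpoints.

    `reached` is the index of the last checkpoint passed; progress
    toward the next one is interpolated from the remaining distance.
    """
    if reached >= len(checkpoints) - 1:
        return 1.0  # crossed the final checkpoint: track complete
    a, b = checkpoints[reached], checkpoints[reached + 1]
    seg_len = math.dist(a, b)
    remaining = math.dist(car_pos, b)
    # fraction of the current segment covered, clamped to [0, 1]
    seg_progress = max(0.0, min(1.0, 1.0 - remaining / seg_len))
    return (reached + seg_progress) / (len(checkpoints) - 1)
```

A car halfway between the first and second checkpoint of a three-checkpoint track would score 0.25, i.e. a quarter of the way around.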
@@SamuelArzt Thank you, I understand. I think if the fitness value were based only on distance, cars would tend to rotate in place.
@@SamuelArzt deep learning or just a complex genetic algorithm?
I knew the green car was going to win
whoever lead turns green
Gr0Us3da4 I know, this is just irony
whooosh?
@@vasco2016 Did you mean joke?
@@johnnyace1086 shut up redditor
Here's my takeaway : no matter how many generations have passed, there will always be idiots on the road driving backwards
Here’s my takeaway: sweet n sour chicken balls with extra sweet chilli sauce, basmati rice and some prawn crackers on the side
@@keir_murray6567 i like your words magic man
The People you have to share the road with are insane
@@kelvinyusuf6658 *of*
@@keir_murray6567 same for me but I don't like prawn crackers
1:25 that was so hype
And then he died
🎵 dejavu 🎶
2:39 "All hope is lost!"
2:43 "Not on my watch!"
That's how I've earned my driving license.
Smashing into walls repeatedly until figuring out how to not smash into walls repeatedly?
Oh Spongebob... Whyyyyyyyy...
Nice. You died 46 times to get a driver license
Galaxy Protector better than me, I died 89 times to get my drivers license
XxNexusxX better than me, i haven't got one yet
245 generations later..
Cars found out that getting out of the track was pointless, now they're building a city in the spawn area
And this was Cars prequel.
They gained sentience
Creative 😂
Now they pass the Turing test
Ending is tragic, tho - they've found out they have built a New-Jersey
I found it comforting to discover that even machines make mistakes while learning.
Typically because the human programmer can't teach or use logic.
Machines are only as smart as their creator.
@@aliensarerealttsa6198 The 2nd line is untrue. With enough training they'll eventually outperform their creators and the code will no longer be recognisable. Ex: the UA-cam algorithm.
True, machine learning systems learn through their mistakes during tests
@@prateekpanwar646 It seems you understand pretty well. I have a question: why don’t the other cars follow the track of the green one?
@@hispantrapmusic301 Because if the green one dies they all die; they need to go as many different ways as possible to have the highest chance of success
2:39
When you're not the fastest, but you are the best
Generation 420: they learned to drift and eurobeat everywhere
lol hehehee
Lmao
DEJA VU
@@citaderu9323 no.
Running in the 90's
Some brave individuals refuse to do what you force them to do, they just crash to the nearest spot right away. They are heroes of their kind, standing against the system.
Not to ruin the fun but it's just a genetic algorithm bruteforcing all possibilities of the matrix. When and if they had a mind of their own we would have achieved general intelligence... stay tuned
QuickMix wow, really? I thought we were creating and then killing real intelligent species.
Hey, you called them individuals, that means they have their own opinions and that requires intellect... Just sayin'
QuickMix And that was the joke.
I tend to overthink things, pardon my superior neural net.
0:15 Generation 4.
Me and my pals graduating from online classes
0:16
Imagine standing in traffic and your car says: "Deep learning protocol started"
I've never wanted a rectangle to go through a tiny gap so badly in my entire life. This is great!
That's what she said
@@Krawna ur pp = rectangle?
@@goblokidinahui4420 if you censor it
@@goblokidinahui4420 a block head
I felt bad for the car when it finished because it seemed to just wander in circles not knowing what else to do. As if to say, "what now!?!?! My existence has lost all meaning!"
Pixels Everywhere that’s what happens when you achieve everything you ever wanted
When you get what you want, but not what you need
"One must imagine Sisyphus happy"
STOP
@@alejandrogarcia-puente6948 wow man real shit so deep bro
I love how utterly confused the cars get when they exit the track haha
"where... where is road???"
The cars that just go the wrong way instantly and crash are my spirit animals
I'm rooting for the green car
The frontmost car always turns green.
Vulcan Viper
*whooosh*
Vulcan Viper
WOOSH
+Swimming Swampert
NOOOO I've always wanted to woosh somebody!!
go onto twitter find some idiot that likes to correct everyone say "go commit die" and bam you got a woosh
Wow I loved this
Thanks, Destin! Hearing that from you means a lot to me.
I really enjoy your videos and have been a fan of your channel for a long time!
@@SamuelArzt it was a great visual. Good work.
@@SamuelArzt Oh! I didn't really realise it was Destin's comment until I read "Been a fan for long time" and then checked. :P
@@johnmctavish1021 yup
Wait, why doesn't Destin even have 100 likes?
I love how when the cars got out, they were like “well wtf do we do now”
it was dancing from happiness
it's amazing how quickly they can get so much better; in gen 1 every car crashed before there were any large turns and by gen 13 many were getting far.
That last car in generation 15: "Oh God I have no purpose!"
This is humans in the future, once machines are doing everything for us.
I don't think so. We'll likely just move on to the next non menial thing. The industrial revolution and automation destroyed _jobs_ not the job market itself, and that era compelled an overall expansion, the AI revolution will probably result in the same. There's more to life than Eating, Copulating, and Working 9 - 5.
I'd be fine with the first two if you add sleep :p
"You pass butter"
@Kerimcan Ak(Sionistas Fuera!) That's humanity's goal as far as I can tell
2:44 when you graduate college and enter the promised land of jobs
Baseer Siddiqui “It’s empty!”
L0L. but, true 😂
Lil U turn first lol
True lol
Without jokes: the A.I. is actually unable to detect anything since it only detects walls, so it doesn't know where to go.
2:43 P1!P1! Great job man, well managed. Absolute masterpiece.
Get in there Lewis.
@@XenophonSoulis pls lewis, dont get in there anymore.
@@wwee1r951 It's not like I like Lewis winning, but that phrase is pretty iconic.
@@XenophonSoulis i know man just kidding xD
This is more intense than watching the dvd screensaver.
Under rated comment
Nothing explains a concept better than showing its application in progress. Fantastic video.
Lovely. Well Done.
I don't know why these are so pleasing to watch. That, and this literally looks like an iRacing start of race with all of that crashing.
I love how most of them just smash into the wall immediately lol
Some of these cars are just built different ig
When they escape do they take over the world
Yes.
Yes of course.
Shhhh... don't hurt their feelings.
No, they keep driving on and wondering why it's a wide open world.
it's not wide open, they will find the overflow border.
One of them might ask "Hey guys! Do you think this could all just be a simulation?"
While the others answer "Pfff... don't be silly!"
These are just illustrated statistics from a random sample of drunk drivers.
lol!
Lmao
Fairytale Overworlds trying to soberize
Are you a Nihylist?
and the "X" are the people..
This has been done so many times, yet, it's always interesting to watch, I WANT MOAR (talking to you youtube algorithm)
Gen 4 was really efficient at reaching a wall.
I cheered out loud when the first car made it all the way through.
Orders an Uber
About 30 Ubers crash into the wall next door
Yay deep learning!
They will first run 200 iterations on virtual cars and then implement the algorithm on an actual car.
@@zeeshanahmadkhalil8920 Just 200? I bet they will simulate 1000+ times with all the possible roads and traffic; only then can it be practical
Because if even a few accidents happen because of this, then it will be banned everywhere 😂
Order from us more often or we'll crash your house
@@random-0 I think they'd censor the news and try fix the holes in the AI while selling it as usual
Its all fun and games till the cars start reading Socrates
You mean the philosopher who never wrote anything?
@@edmundironside9435 He never wrote anything himself but his students wrote down his thoughts and lessons
Then they would hate diplomacy because we humans are idiots; I think we would all die then
I just watched this for no reason and I’m sure I will again when it pops up in a few years
I really love how they're just spinning simultaneously after beating the level (you can see it for a moment). Clearly it's happening because without obstacles in their sight, the network's input is just zeros and they have "no information" whatsoever (one single input value) to make different decisions, so they're just spinning, not "understanding" what to do.
Great job. Simple but smart.
Simple?
Neural networks actually are really simple, but the concept is a bit difficult to grasp. It is basically just trial and error, where each 'node' is a variable that it is trying to maximize or minimize to try to maximize whatever the final expectation is.
I can’t help but imagine Mario Kart bots doing nothing but ran into walls for literal weeks to develop the bots
nah they make a path for the bot
Hello 5 years later, and it is still Amazing dude!
This was so hypnotizing to watch. I like it!
I really appreciate you taking the time to comprehensively answer the questions on the comments. I also appreciate that you wrote this from scratch. Well done!
Thank you for the kind words! That means a lot to me.
Hey there, this is an amazing learning opportunity for me. Your video inspired me on an extremely important project, and I used the source code you shared a lot. Cannot thank you enough.
Thanks for the kind words!
Each turn is a “learning curve”
The slower you go -- the further you get. Nice job, man!
Not always. I thought so too, but I have seen some instances where even slower cars crashed earlier. I think it's an optimal speed that matters.
@@ibknl1986 he just translated a silly Russian proverb into English, don't pay attention
If you placed the final generation in a completly different track would they have to learn from scratch or would they be able to apply what they've already learned to clear it much faster?
They would be able to clear it much faster. If the new track does not introduce any fundamentally new features (such as u-turns or gaps in the walls) they should be able to finish the track right away.
What were you using for the five input nodes? I know they were points, but was it just the distance of these points from the car?
I think they were collision indicators. 5 points ahead of where a collision would happen for reference on guiding.
The five points you are seeing are just the current reading of the five distance sensors of the car.
Each car has 5 sensors which measure the distance to the nearest wall. The readings of these sensors are the input of the neural network.
The blue crosses are simply there to visualize where the sensors are currently pointing.
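To make the data flow concrete: the five normalized sensor readings go in, and two control values come out. A minimal Python sketch of such a forward pass (the layer sizes, weight layout, and use of a plain sigmoid are assumptions, not the project's exact code):

```python
import math

def forward(inputs, weights):
    """One fully connected layer with a sigmoid activation.

    `weights[i]` holds one weight per input plus a trailing bias
    for neuron i.
    """
    out = []
    for neuron in weights:
        s = neuron[-1] + sum(w * x for w, x in zip(neuron, inputs))
        out.append(1.0 / (1.0 + math.exp(-s)))  # sigmoid squashing
    return out

def drive(sensor_readings, layers):
    """Feed the five normalized distance readings through each layer;
    the two final outputs are read as engine force and steering."""
    x = sensor_readings
    for layer in layers:
        x = forward(x, layer)
    return x

# All-zero weights: every neuron outputs sigmoid(0) = 0.5.
layers = [[[0.0] * 6 for _ in range(4)],   # 5 inputs (+bias) -> 4 hidden
          [[0.0] * 5 for _ in range(2)]]   # 4 hidden (+bias) -> 2 outputs
controls = drive([0.2, 0.4, 1.0, 0.4, 0.2], layers)
```

An untrained (all-zero) network outputs a constant [0.5, 0.5] regardless of the sensors, which is roughly why generation-1 cars drive so uniformly badly.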
Did you use an open source neural network or code your own? I was surprised to see such good results in the first 10 gens. I was expecting it to take longer for even one car to finish the track.
It's almost human like, we try we fail, we try we fail, until we perfect it. Awesome video!
This model is great for learning how deep learning works. It looks interesting!
That's actually really interesting how you used multiple cars in each run. Really cool
They: what videos do you actually watch?
Me: it's complicated
I like how the cars that got out look happy roaming around and around
Never knew I needed to see this until I saw this video. Thank you
I love your simulation.
And I would love me to see some more in-depth look at your neural network, or maybe the code/project?
Thank you for your nice comment!
I have actually been planning to make more videos explaining neural networks in general for a long time now, and I would also like to put the source code of this project on GitHub. Unfortunately I am quite busy at the moment, but hopefully I'll get around to doing it next month. So feel free to keep an eye on my channel ;)
Cool, I would love to see it.
I'll look forward to it :)
Unfortunately, I think I still won't be able to upload new videos this month... But at least I finally got around to uploading the project on GitHub. You can now find a link to the repository containing the entire source code at the top of my website: arztsamuel.github.io/en/projects/unity/deepCars/deepCars.html
Samuel Arzt Thanks so much dude! I'm starting to learn deep learning and it is really cool of you to share it. If you upload explanation videos I will watch them :)
@@SamuelArzt I only saw your comment this year, I am new. Is your video available, sir? Thanks
In my city people actually drive like this.
Are you contractor tho?
That is so cool! Nice Samuel!
Green car - that one promising student from our class, red cars - the rest of the class
put some music in the background and you got yourself your own 'fast and deep learning furious'
Spoiler alert: the green car wins
:p
haha
That's wild :v
NOOOOOOOOOOOOO...THANKS!! FOR RUINING IT!!!!!😒
Seriousity alert: the winning one becomes green.
I've never thought I'd be so emotional over digital green rectangle
this feels like one of those games where you control one of many characters on screen, but then like a hundred others are also playing and they control the rest.
Thank you so much for providing your source code so that I could understand the process. I mean it from the core of my heart. Be blessed, and all the success to you, buddy
Thank you for your kind comment! That truly means a lot to me. I am glad that my project was able to help you.
Had to drop a like and sub when I saw you gave out the source code 🙌
Great to see machine learning in practice!
Watching stuff like this makes me really want to study it, but it always ends with me not understanding and giving up
Awesome video! Thank you!
I think it's interesting to think about whether they're actually learning to avoid the walls or just learning the track and trying not to hit where they've already hit before.
I'd say they're learning the track
It terrifies me to think this is in fact how Mother Nature operates--throwing countless individuals at the obstacle course of life until she hits on the few with the right combination of evolved traits to make it through. Each car that crashed represents a death--a casualty in her ruthless strategy.
Hmm I see it more like a bunch of cars thrown on a road until one of them doesn't crash
@@sf8262 F evolution bs tired with these liers
This is a decent metaphor for how technology has progressed through human history.
Why is this so satisfying to watch?!
3:13 47 Generations and still half of them drive against the wall right at the beginning. 😂
Of course they do; cars of a new generation are random mutations of the best-performing car(s) of the last generation. The control network is mutated completely randomly, and most of the time that does not result in beneficial changes, no matter how many generations you evolve it for.
@@aleksandersuur9475 so just like human beings right?
@@tamjidterrorblade Well, sure, it's just like stillbirth in mammals. Of course in software the mutation rate is a free choice of the programmer, so it can be set much higher than it naturally is in animals. Simpler GMO techniques for grains and such work much the same way: you irradiate your batch as seeds, and sure, many of them fail to even sprout, but a few specimens get a beneficial mutation. And you really only care about the best performer; the tens of thousands of bottom performers don't matter in such a case, and the faster they eliminate themselves from the race the better. It's basically a sped-up version of normal breeding; in the end you get the same result, but with fewer generations.
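The elitism-plus-random-mutation scheme described in this thread can be sketched roughly like this (illustrative Python; the mutation rate, Gaussian noise, and single-parent elitism are assumptions rather than the video's exact algorithm):

```python
import random

def next_generation(population, fitnesses, mutation_rate=0.1, sigma=0.5):
    """Elitism plus random mutation: keep the best genome unchanged and
    fill the rest of the population with mutated copies of it. Most
    mutations are neutral or harmful; only rarely does one help."""
    best_idx = max(range(len(population)), key=fitnesses.__getitem__)
    best = population[best_idx]
    new_pop = [list(best)]  # the elite survives unmutated
    while len(new_pop) < len(population):
        child = [w + random.gauss(0.0, sigma)
                 if random.random() < mutation_rate else w
                 for w in best]
        new_pop.append(child)
    return new_pop
```

Because every child is a random perturbation of the champion, plenty of cars in every generation still steer straight into the first wall, just as at 3:13.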
I can't wait for this technology to be used in real cars, after the initial body count, this will be way better
Yeah, overpopulation will end
The thing like goes and then stops and like goes again and goes like further. Amazing
I can’t believe I spent 3 minutes watching a rectangle get through a hole
It feels a little disturbing when they make it out. Like they accomplished their purpose of existence and then they just don't know where they are and why.
It's like watching sperm swim in fallopian tubes!
I knew there would be at least one person who would say or think that.
...I even typed "sper" in Google Chrome search to find comments like that.
that's one way to describe it....
Suddenly i remembered that one game where u r a sperm and trying to race to the egg
I love the burnout at the end
That donut in the end was just perfect
Generation 11:
“Alright COOL guys, we are ALMOST there”
Generation 46:
“Alright COOL guys, we are ALMOST there”
Generation year 2020(trying to improve myself):
“WHY IS THIS IN MY RECOMMENDATIONS😭”
I like how they spin brodies to celebrate when they make it.
That's such a cool concept, man
This was a really good explanation
This was really satisfying for me to watch. I think that the people who study, and develop technology are some kool individuals 👍🏾👍🏾👍🏾
Looks like when I play *any* Racing game , hit a wall, then click "restart race"
Love how both machine and man have the immediate urge to rip some donuts as soon as they are given an open road
Great AI you got here!
I'd like to see this kind of AI in Cyberpunk 2077
It's there, but they just left it at gen 5
This is a great example of how evolution works for life on Earth. Each generation is almost the same as the last one except with a few random changes to the DNA (from radiation, chemicals, etc.). If those changes hinder an organism's ability to survive (which they most likely will), they'll likely die off before reproducing. If the changes help the organism to survive and reproduce (and if those traits are genetic), the next generation might have those traits and will be stronger. This is how life evolved from single cells to complex animals like humans.
POV: food traveling through your intestines and finally getting out
I like the fact that half of the cars didn't even go ahead and were like: "Imma head out"......LoL 🤣🤣🤣
is it just learning this specific track, or is it learning how to avoid walls?
Can you apply the network to multiple tracks and reinforce the learning? what about more advanced tracks?
Instead of creating a fixed track, could you try building procedural tracks? There is a chance at least a *part* of what they are doing might be due to the agents learning the track by heart.
Yes, the tracks could be generated procedurally, and also yes, there is a chance (a very high one, even) that the agents are simply learning this particular track by heart. After all, if you only train them on one track then that's what you want them to do: learn how to navigate this particular course in the best possible way.
If you want the agents to generalize to other tracks, i.e. to be able to complete tracks they have never seen before, you have to train them on many different tracks. Otherwise they get overfitted (or overtrained) on a small number of tracks (which they become quite good at), but their generalization capability decreases.
Still, the cars shown in the video are not overfitted at that point (at least not substantially overfitted). You can even see how the cars, which were able to leave the course, learned to maintain a certain distance from walls, in order to not crash. Of course it could be that this particular distance only works on this track, or that the car only learned to keep a distance from walls to the left of it, etc. But that's exactly why you would then take that neural net and train it on other tracks as well (usually: the more, the better).
I myself did a similar simulation with 8 input neurons holding the distances to the walls around the car. As far as I can tell, there is no way this approach would make the agent memorize the track. It learns how to steer to balance the distances from the walls so that none of them gets close to 0. There is no reason why that wouldn't be a general solution, because all the agent learns are rules like: "if there is a wall on the left, steer right"
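The "balance the wall distances" rule from the comment above can be written out directly as a toy controller (hypothetical Python, with an assumed gain constant); a network trained on distance sensors can end up approximating something like it:

```python
def steer(left_dist, right_dist, gain=1.0):
    """Steer away from the nearer wall (positive = steer right).

    The rule depends only on local wall distances, not on track
    geometry, which is why it can transfer to unseen tracks.
    """
    total = max(left_dist + right_dist, 1e-9)  # avoid division by zero
    return gain * (right_dist - left_dist) / total
```

A wall closing in on the left produces a positive (rightward) command, a wall on the right a negative one, and a centered car holds its line.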
Simple, good, to-the-point video
So many sacrifices on the way to success.
From what I think I understood, the importance of hidden layers lies within the fact that some functions can't be replicated simply with linear operations (multiplying inputs by weights and adding them together), and that the squashing function (hyperbolic tangent, for instance) was the key to creating more complex functions that enlarged the neural network's search space. I may have read this all wrong, but I think you said that you didn't use any squashing function in your network.
Have you tried simulating it without using hidden layers, by any chance - and if so, did you actually get very different results from it?
Thanks for your in depth comment!
You are right that the non-linearity of neural network layers is very important. However, you can achieve non-linearity with single-hidden-layer networks. The universal approximation theorem (often traced back to Kolmogorov's superposition theorem from 1957) states that a neural network with only a single hidden layer comprising enough hidden neurons can approximate any multivariate continuous function.
However, many experiments and studies have shown that deeper architectures are generally superior to shallower ones, as far as performance and generalization capability are concerned.
I did use a squashing function, however I prefer the term activation function. I don't know why you thought I didn't; I'm sorry if I didn't state that clearly enough. The network shown in the video (which is an older version) uses the commonly used sigmoid function. After a lot of research I changed the network to use the "softsign" function instead. The softsign function is similar to the hyperbolic tangent, which you mentioned, with some additional advantages. The hyperbolic tangent is also a better function than the sigmoid (at least for this application). If you are interested in the softsign function and its advantages, and why the sigmoid function seems unfitting for this particular application, I recommend reading Glorot and Bengio's paper from 2010 called "Understanding the difficulty of training deep feedforward neural networks". It's not that long and I think it is quite interesting. You can find it on Google Scholar.
I don't remember testing it with a single layer, however I recall testing it with one more and one less layer and I did indeed get very different results. However, I have to admit that back then I did not run enough test cases to jump to a clear empirical conclusion.
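For reference, the three activation functions discussed in this thread, written out in Python. Sigmoid saturates toward 0 and 1 exponentially and is not zero-centered, while tanh and softsign both map to (-1, 1) and softsign approaches its asymptotes only polynomially, one of the properties Glorot and Bengio discuss:

```python
import math

def sigmoid(x):
    """Range (0, 1); saturates exponentially, outputs centered at 0.5."""
    return 1.0 / (1.0 + math.exp(-x))

def tanh(x):
    """Range (-1, 1); zero-centered, saturates exponentially."""
    return math.tanh(x)

def softsign(x):
    """Range (-1, 1); zero-centered, approaches its asymptotes only
    polynomially, so it saturates more gently than tanh."""
    return x / (1.0 + abs(x))
```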
Same...
TEACH ME WHAT YOU KNOW seriously, you got discord? Good add me Boostio#5047
@Samuel Arzt, I think the huge difference in performance when testing with one more or one fewer layer might be because you use a genetic algorithm for the training. Most research focuses on back-propagation, not evolution, since evolution is really slow to converge in comparison to back-propagation.
For an evolutionary approach the best "neural network" could possibly be [input] -> [output] without any hidden layer in between, since you still have some weights. This results in fewer parameters to tweak, and the evolution could speed up.
For more complex data it might not be possible to solve it using only a single hidden layer (within reasonable time and computational power). Face recognition, for example, uses several hidden convolutional layers, where each layer creates an intermediate representation of the image.
The choice of tanh or softsign should not really change performance much if you are using evolution for the training. As long as you use a non-linear function you will benefit from having multiple hidden layers.
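The parameter-count argument can be made concrete. A sketch (Python; the example layer sizes are hypothetical) counting the weights plus biases of a fully connected net shows how much smaller the search space gets without hidden layers:

```python
def param_count(layer_sizes):
    """Weights plus one bias per neuron in a fully connected network.

    `layer_sizes` lists neurons per layer, e.g. [5, 4, 3, 2] for
    5 inputs, two hidden layers, and 2 outputs.
    """
    return sum((n_in + 1) * n_out
               for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))

# A direct 5 -> 2 mapping has far fewer parameters to evolve
# than a 5-4-3-2 network: 12 versus 47.
direct = param_count([5, 2])
deep = param_count([5, 4, 3, 2])
```

With fewer parameters, each random mutation covers a larger fraction of the search space, which is one reason an evolutionary search can converge faster on the shallow net.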
So when I was playing Super Meat Boy, the replay just showed my deep learning progress.
Beautiful
This is a great example of how natural evolution works.
this is how sperm works.
Enter the Jouz better said: how your brain works:))
Naughty boy.
Exactly thought the same xD
Lol
Sperm would just send an almost endless stream of cars off the track hoping one would finish... also a few crashed cars would "widen" the track
*Legends say they are still riding!*
Satisfaction after this video📉📉📉