I'd love to see an improved version of this. Like I'd like to see it get good enough that it can drive itself on paths like this that it's never seen before. That'd be sick.
For that though, the entire program would need to be redesigned. Right now it uses images to "make decisions," but to make it actually make decisions (predictions) it should use a neural network: you give it data of yourself driving in different situations, train a model on it, and then it can make predictions about what to do based on the confidence it has from what it learned (the reference data). A good way to do this would be to use OpenCV and then somehow let the AI make a prediction using vision. I'm currently unable to achieve this, but some day I will be.
Or just drive around the entire world 40 billion times, probably even more, in different cases.
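For anyone curious what the behavioural-cloning approach described above could look like, here is a minimal Python/Keras sketch pairing camera frames with recorded steering angles. The layer sizes loosely follow the NVIDIA PilotNet layout and are illustrative only, not necessarily what the video used; `train_images` and `train_steering` are placeholders for your own recorded data.

```python
import tensorflow as tf

def build_model(input_shape=(66, 200, 3)):
    # Illustrative PilotNet-style stack: camera frames in, one steering value out.
    return tf.keras.Sequential([
        tf.keras.Input(shape=input_shape),
        tf.keras.layers.Rescaling(1.0 / 255),             # normalise pixels
        tf.keras.layers.Conv2D(24, 5, strides=2, activation="relu"),
        tf.keras.layers.Conv2D(36, 5, strides=2, activation="relu"),
        tf.keras.layers.Conv2D(48, 5, strides=2, activation="relu"),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(100, activation="relu"),
        tf.keras.layers.Dense(50, activation="relu"),
        tf.keras.layers.Dense(1),                          # predicted steering angle
    ])

model = build_model()
model.compile(optimizer="adam", loss="mse")
# model.fit(train_images, train_steering, epochs=10, validation_split=0.2)
```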
For those wondering, looks like about $250 for the cameras, $75 for the motor controller, $40 for the steering motor itself, $30 for the Arduino Nanos, $120 for the batteries, and probably another $20-30 for the various other electronics & wiring pieces. All-in I'm guessing a bit over $500 to add self-driving. Huge cost savings by not using LIDAR and having a laptop be the brains instead of an embedded system like Nvidia Drive. Very impressive!
How on earth does this not have more views!? Amazing work!
Thanks so much :)
What a brilliant project, it had every element of engineering, especially the ever-so-dreadful equivalent of "ah, forgot the semicolon". I loved every second of this video and I'm even considering doing something similar for my capstone project. Hope to see more content from you in the near future.
Thanks a lot! More content to come!
You did it! It is educational, informative and entertaining. Excellent work!!!
Thanks Hyuk! I really appreciate it!
I hope the algorithm picks up this video soon, very underrated, great stuff
Thanks so much. I appreciate it!
I cannot imagine how over the head that course you took would be for me 😅 superbly done!!! 👍👍👍
Thank you!
This is such an underrated video! I don't know why I didn't get this recommended earlier. Great work!
Great job! This was my intro to your channel. You’re really easy to listen to and have a nice calm approach. I can 100% relate to a simple single coding error throwing you off in a relatively complex project such as this one.
Nice man, I just found you yesterday and I've had a good time watching all your videos. Can't wait to see what else you create!!
Thanks so much! Happy to hear that!
Very juicy project indeed, thanks for sharing!
You inspired me to "re-engineer" my old Volvo 740.. 😃
This is really motivating me to do one of my projects that would use CV for navigation. I was wanting to buy the Unitree Go1 and do something similar around my town, have a manipulator arm and such. Take it to the store, grab a bag of chips, then leave. Something I have always wanted to do, but haven't been able to since moving houses frequently for uni.
Watching this video has really motivated me to push for that, as this is why I am in university right now doing mechatronics, despite the wavering motivation to push on from the flood of assignments and exams.
Well done mate, your video is a real inspiration. If I do get around to my project over summer break, you would have been a major contributor to keeping my head up in my studies and pushing for that goal. Your stuff is legendary and I look forward to seeing what you do next.
New sub here. Great job, you explain things well, I'm sure a lot of people see the value in you sharing your learning experience. Keep it up.
Bravo Blake, this is such huge work. I am amazed by the patience... 🎉
Thanks!
Hell ya! Amazing work Austin!!
Love it!
Thanks Jay!
Favourite video so far!! 😁👏🏼 the smartest guy 🤓🥰💗
Smartest AND handsomest. 😊❤️
Wow, you made this really easy to understand. I am surprised by the lack of views. Truly awesome video.
Talk about leveling up. Awesome project. Well done.
Thank you very much!
Awesome project!! 😁
It's a perfect re-creation of tesla, random crashes and all. 😂
The project itself is really impressive and the video is great as well.
hi askill, have you become a flat earther yet?
So cool! This guys a genius 🎉
Thanks Mike!
Broo, well-done. I came from your Instagram and I'm super amazed man, well done bro
Good to see you again. Fantastic video
Thanks a lot!
Fantastic work, Austin! Really well done! 😃
Stay safe there with your family! 🖖😊
Thanks MC! I appreciate it!
Junkyards have many Asian-made cars with electric steering assist. I got one for my Prius and only paid $20 - great for that type of application.
this is not a good idea
this is a wickedly awesome idea
Amazing job! I hope to put something like this together one day.
Hell yeah, that was amazing!!!!!!!!!
You're a genius!❤
Thanks Foxxyy 😎
@@austiwawa I've been following you since the alternator go-kart project and I'm loving seeing bigger and better projects like this one
@@foxxyytofficial I remember you commenting on the alternator videos! I really appreciate the support and I am happy that you enjoyed this project!
I hope Elon doesn't sue this guy! What a great project!
Thank you!
he would hire him @@austiwawa
This project has nothing to do with the software used in Tesla's so there's no problem.
@@mrfrog8502 I was being sarcastic
He wouldn't sue him he would hire him
I like that project, and I like the results you eventually obtained. It's nice to be able to run a continuously looping path, without straying, until the battery runs out, but that has limited usefulness. What might be nice is that when it comes to that intersection in the park, it could be programmed to always turn a certain direction, based on the route that you set. In addition, it might be nice to have a prompt mode, where it might stop and ask, if you tell it to do so. Other goals that would be nice: being able to save and choose pre-programmed routes, where you could place it at a point 'A' and it would take a programmed route to point 'B'. It might also be nice if it could recognize parts of the route, so you could place it at point A plus an offset and it would still be able to find its way to a destination. Also, it would be nice for it to recognize where it is along a route it has travelled before, and make it to a destination that might be on a different route but is still in the larger map. For instance, maybe you could place it at a random spot on the main loop, but you want it to turn left and go to a new destination. Maybe it could run the loop until it gets to the intersection, then turn left and start a new program - based on a larger program that's outside the nest.
I know at this point this is outside the scope of what you want to do, but if you decided to expand the scope of this vehicle's capability, I'd be very interested in your journey. I am starting a similar project that's smaller in scale. I don't intend to ride along in the vehicle, but I'd like to eventually be able to program it to autonomously go to certain destinations from a common point, then return. Collision detection would be the next obstacle I'd want to tackle. I appreciate the video and the work you put in.
Yeah!!!! I saw this video on both LinkedIn and Facebook but I was searching your UA-cam channel and finally got it. Really great job 🤩. Keep going, bro...
My experience with this (specifically potentiometers and Arduino): your measured potentiometer results may vary as the potentiometer ages, and if the voltage can change or spike, that will change the reading of the potentiometer. It's a frustrating dynamic to debug. I would recommend a digital encoder, a stepper motor... or a verification step on the potentiometer results. Keep up the good work - love it!!!!
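A cheap software-side version of that verification step, assuming the Arduino streams raw ADC readings to the laptop: median-filter the values and reject out-of-range samples and physically impossible jumps. All thresholds below are made-up starting points to tune against your own hardware.

```python
from collections import deque
import statistics

class PotFilter:
    """Median-filter raw potentiometer readings and reject implausible values."""

    def __init__(self, window=5, adc_min=50, adc_max=980, max_step=80):
        self.buf = deque(maxlen=window)
        self.adc_min, self.adc_max = adc_min, adc_max
        self.max_step = max_step          # largest believable change per sample
        self.last_good = None

    def update(self, raw):
        # Readings outside the usable ADC range usually mean a wiring or
        # supply glitch rather than a real steering angle.
        if not (self.adc_min <= raw <= self.adc_max):
            return self.last_good
        self.buf.append(raw)
        med = statistics.median(self.buf)
        # Reject single-sample spikes bigger than the steering could move.
        if self.last_good is not None and abs(med - self.last_good) > self.max_step:
            return self.last_good
        self.last_good = med
        return med
```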
Turned out great! Was a pleasure to do the work for you.
Cool! I'm thinking self driving wheelchairs...
I love your mind and your essence/ Beautiful stuff my friend 🙂
Warm wishes from Perth, Western Australia.
Good job. Yeah, it's easy to overlook something very simple and it throws everything off.
Wow man, such an awesome project, well done!
Very motivating..You nailed it.
Not to be that guy, but couldn't this be done more efficiently and effectively if you had just made a control-mapping system that recorded the potentiometer values as you drove the desired path and then played those inputs back? I just don't think a neural network is really necessary for this application. A better, more efficient way of doing this would be to create a color filter that maps the edges of the path, then have a Raspberry Pi average the distance between the two path edges at any given time using computer vision. You can take that data, combine it with your little potentiometer steering-angle decoder, and make a simple algorithm that adjusts the steering angle to move the car toward the averaged path. The benefit of this is that it will then work on any path that looks like the one you made it for.
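Roughly what that classical, non-neural-network pipeline could look like with OpenCV: threshold the path colour, find where it sits in a band near the bottom of the frame, and steer proportionally toward its centre. The HSV range and gain are hypothetical and would need tuning for the actual path and lighting.

```python
import cv2
import numpy as np

def steering_from_path(frame_bgr, k_p=0.8):
    """Steer toward the centre of the path-coloured pixels near the bottom of the frame."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Hypothetical HSV range for grey pavement; tune for your path and lighting.
    mask = cv2.inRange(hsv, (0, 0, 120), (180, 60, 255))
    h, w = mask.shape
    band = mask[int(0.7 * h):, :]                # a strip just ahead of the kart
    cols = np.where(band.any(axis=0))[0]
    if cols.size == 0:
        return 0.0                                # no path visible: hold the wheel
    path_centre = 0.5 * (cols.min() + cols.max())
    offset = (path_centre - w / 2) / (w / 2)      # -1 (path far left) .. +1 (far right)
    return float(np.clip(k_p * offset, -1.0, 1.0))
```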
Amazing project!
At 30 dollars per camera, that's 90 dollars in webcams for self-driving. It would also need a bit more work to make it go on other paths by processing all roads in real time. Amazing video, I really want more!!
Amazing work my friend. This is so far beyond me. A++
This is really cool! Only thing is, I can't see why adding 2 additional angles would help the model if you aren't going to use those for prediction. I work on computer vision with convolutional neural networks for my job, and my gut instinct would tell me that adding the 3 cameras should actually make the model perform worse, unless you are doing some kind of preprocessing to simulate the data as it would appear from the center camera.
Like, if you took this to the extreme scenario, with the side cameras pointing 90 degrees from the center camera, and trained a model on that to say "if the center camera sees this, you should just continue straight", your model would just want to drive into the side of the walkway. As a matter of fact, when I am training my models, one of the important things we try to ensure is that the training data angles are as close as possible to the angles used at prediction.
Other stuff I'd point out: the laptop is a nice solution, but you can buy a special-purpose SoC like a Google Coral board or an Nvidia AGX that would use less power and take up significantly less space. I think the Google Coral TPU is only about 100 bucks. For a model this simple it would probably be sufficient, and since your model is so simple (just stay on the road, essentially) and your dataset is so small, you could probably scale your images (and model) down really small for really fast predictions. The military actually trained a simple self-driving model back in the 90s or so; I think it wasn't even really a neural network, just an SVM with something like 100x100 images as input. Maybe even smaller.
This would be a really cool project to have worked on, although if it were me I would probably take it too far and have something like a route selector, where the current route segment is an input into the model, or maybe use GPS and give a vector delta from the desired location. Assuming this sidewalk is in Google Maps, you might even be able to integrate that.
A better approach might also be to make the model attempt to center the vehicle in the lane and collect data by going through phases of collection where you drive too close to the lane edge in either direction and then another phase where you center it. Then the model could say "I'm too far right" and you could have logic to correct, kinda like you do for adjusting the wheel position with a feedback loop.
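The preprocessing trick this comment alludes to (popularised by the NVIDIA end-to-end driving paper) is not to feed the side views as separate inputs at prediction time, but to relabel them during training with a fixed steering correction, so they act like recovery examples for the single centre camera. A minimal sketch, with an assumed correction value:

```python
def augment_with_side_cameras(samples, correction=0.15):
    """samples: iterable of (left_img, centre_img, right_img, steering_angle).
    Returns (image, steering) pairs where the side views are labelled as if
    the car needed to steer back toward the centre of the path."""
    pairs = []
    for left, centre, right, angle in samples:
        pairs.append((centre, angle))
        pairs.append((left, angle + correction))   # seen from the left: steer right
        pairs.append((right, angle - correction))  # seen from the right: steer left
    return pairs
```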
That Tesla kart you made was so cool Lmao
What version of FSD is this?
1e-1 😆
It is awesome. I love these exciting works and projects. I appreciate you.
Hey Austin, what a great project man. I am doing my master's in AI for autonomous electric cars and I can tell how tough this project is: making the deep-learning algorithm, the electronics, and the mechanical parts too. It's astonishing that you have done this alone. May I ask how much money and time it took? I'd love to have some insights.
What did you major in as undergrad?
I love this project..!!! Nice work..
Why did you use 2 Arduinos? The Nano has 3 timers on board, which can handle everything you need: ADC reading, UART receiving, and generating the PWM signal.
Hi Austin, Do you have a GitHub repository for your Arduino code?
I have 3 Ackermann-steering cars at 1/10 scale, ROS1 and ROS2 versions with Nvidia SBCs. I also built a Pixhawk-controlled GPS 1/10-scale car with LiDAR avoidance. I can't crack the nut on path planning outside; the issue is the GPS gets unreliable near trees, and I can't effectively fuse camera data for sidewalk segmentation with GPS. I like your approach of training on the path. I know enough that the devil is in the details, even if you have the code.
Too much engineering ❤❤❤
You are absolutely far better than Elon: you did it alone on an extremely limited budget, whereas Tesla has an operating budget of over $100 billion US dollars at its disposal.
I would know; I am an engineer with over 15 years of experience in artificial intelligence who has developed machine-learning algorithms and cognitive software models.
Excellent work. I enjoyed it.
Ramiro, that car is impeccable. In Argentina I've only seen two listed just like it, with the 19-inch wheels and the 1.75 turbo engine. They were also red, Ti version. That was a few years ago now.
I'll mention that I have a 2010, also the Ti version and also red, with less than 20,000 km. It has the gearbox, which for its time wasn't bad at all.
The car handles incredibly well.
As for the engine, I'd say it could give more. Now I see there was a Novite kit; I didn't know about it. From what I can see it only applies to the TBI engine.
Good video. Regards.
That's an amazing project, good work!
Great job. Love the project!!!! Some RC car hobbyists did Donkey Car using TensorFlow, I believe - good resource.
Austin this is insane. Kudos!
Careful with that epoch thing; self-driving accidents never produce afterlives.
Put your tech on a riding mower. Make it mow perfect lawns!
Love it. I am working on mine and your project is such an inspiration 😊
Cool project. A suggestion to improve the steering is telling the car to slow down at higher steering angles.
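A sketch of that suggestion: scale the allowed speed down linearly as the commanded steering angle approaches full lock. All numbers are illustrative placeholders.

```python
def speed_limit_for_angle(steer_deg, v_max=8.0, v_min=2.0, full_lock_deg=30.0):
    """Allowed speed (units of your choice) shrinks linearly toward v_min
    as the commanded steering angle approaches full lock."""
    frac = min(abs(steer_deg) / full_lock_deg, 1.0)
    return v_max - frac * (v_max - v_min)
```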
42 seconds in and I'm subscribed. I can't wait to finish this video!!
Just like that I'm done with the video and that was amazing. It's impressive enough to have the software skills, but doing all of that design, building, machining etc yourself is just insane. Props to you! Thanks for sharing all of this stuff, makes me excited to work on my own projects!
You are definitely over 30 😂😂
Excellent work. Can you train an AI to spot and avoid potholes? Spot, prep for, and overcome speed humps? There's your money, boet.
Thanks! If I avoid specific objects during training, the trained model would probably do the same. I was thinking the next step could be some sort of object recognition.
@@austiwawa Eventually, a unit that is trained to spot potholes and notify with a solution and an alternative solution for pothole avoidance, similar to the lane-change system. The other thing that greatly interests me is speed-hump detection: spot the variance, notify the user, measure the variance, and prepare the suspension for impact, similar to Mercedes' magnetic suspension but with feedback.
Then, using gyroscopes and flywheels, we can make the vehicle do a controlled jump, Speed Racer style.
It's 2023; we should be on fusion power and beyond Saturn already.
Can we at least make a car smart enough to do cool car things? How do we make the car jump in the first place? A mantis shrimp/grasshopper leg mechanism. How do we make it land on its wheels? Gyroscopic feedback and spinning flywheels on each axis.
Very cool, I'm impressed! If I had any of the tools you have, I'd try to recreate this, but on a full-sized car.
Brilliant project 🎉🎉🎉🎉🎉
This is too good, well done on your achievement. Keep up with the experimentation!!!!!
Can you feed in the images required for a new path through Google Maps Street View, so as to allow it to self-drive anywhere?
Hello Sir,
I recently watched your self-driving car project video, and I was absolutely amazed by it! My name is Clancy, and I'm a college student from India. My team and I are eager to learn more about this project, and we would be incredibly grateful if you could guide us in building something similar. Your expertise and insights would mean a lot to us as we embark on this exciting journey. We are truly passionate about developing our skills and would love to learn from someone as experienced as you.
Looking forward to your response!
Dammm that's so cool, now make it drive everywhere 🤣
This was freaking amazing. Thank you!
Fantastic work!
Thanks!
gosh, very cool! ❤
Thank you!
Are you using lane following? Curious, as your path only has borders but no lanes. Would love a follow-up deeper dive into the OpenCV code.
6:57 Why would you admit that a location you revealed on the internet is extremely _close_ to where you live
I'd love to see you implement openpilot on the kart.
Brilliant! I'm sure this only captures 1% of how hard this project must have been😅
May I ask, do you have an engineering/masters degree?
This is like 4 different masters
All the successful tries were with mostly shadows (the golden hour). You need more training in harsh lighting conditions to be able to handle the shadows.
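Short of collecting more footage in harsh light, one partial workaround is to augment the existing training frames with random brightness changes and synthetic shadows. A minimal OpenCV sketch; the scaling ranges are arbitrary starting points.

```python
import cv2
import numpy as np

def random_brightness(img_bgr, rng=np.random.default_rng()):
    """Scale the V channel by a random factor to simulate different lighting."""
    hsv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    hsv[:, :, 2] = np.clip(hsv[:, :, 2] * rng.uniform(0.4, 1.4), 0, 255)
    return cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)

def random_shadow(img_bgr, rng=np.random.default_rng()):
    """Darken a random wedge of the image to mimic a cast shadow."""
    h, w, _ = img_bgr.shape
    x_top, x_bottom = rng.integers(0, w, size=2)
    mask = np.zeros((h, w), dtype=np.uint8)
    wedge = np.array([[x_top, 0], [x_bottom, h - 1], [w - 1, h - 1], [w - 1, 0]],
                     dtype=np.int32)
    cv2.fillPoly(mask, [wedge], 1)
    out = img_bgr.astype(np.float32)
    out[mask == 1] *= rng.uniform(0.5, 0.8)   # darken the shadowed region
    return np.clip(out, 0, 255).astype(np.uint8)
```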
All hail the algorythum 👍👍👍
Great project!
Thanks!
will you post your model anywhere? like huggingface? would love to check it out!
💪👏👏👏👏👏
I loved this project, well done
What an absolutely fun and challenging project. What models did you train for this?
Do you lose energy efficiency through all the micro-adjustments the software makes?
So sick, inspires me to try something similar of my own
Did you get into 3d mapping or was it based on 2d segmentation?
Did you reconfigure the neural network for a single camera input, or was it expecting 3 different cameras but receiving the same camera at different times?
Use wider-angle cameras; you'll get a better, more reliable result.
Also look into RTK navigation and integrate each session's path into an averaged path, as that's what all these self-driving companies are doing.
The really special sauce is in real-time navigation using computer-vision fusion of RGB, depth, laser, radar, and other sensors.
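If steering targets come from repeated RTK/GPS traces of the same route, averaging them is mostly a resampling problem: parameterise each session's path by arc length, resample to a common number of points, and take the mean. A small numpy sketch, assuming every session covers the same route in the same direction:

```python
import numpy as np

def resample_path(xy, n=200):
    """Resample an (N, 2) array of positions to n points evenly spaced by arc length."""
    xy = np.asarray(xy, dtype=float)
    seg = np.hypot(np.diff(xy[:, 0]), np.diff(xy[:, 1]))
    s = np.concatenate([[0.0], np.cumsum(seg)])       # cumulative distance along path
    s_new = np.linspace(0.0, s[-1], n)
    return np.column_stack([np.interp(s_new, s, xy[:, 0]),
                            np.interp(s_new, s, xy[:, 1])])

def average_path(sessions, n=200):
    """sessions: list of (N_i, 2) position arrays, one per recorded drive of the route."""
    return np.mean([resample_path(p, n) for p in sessions], axis=0)
```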
This is awesome! Have you considered adding pathfinding? It’d be super cool to be able to mark a point on a map and have the cart drive you there!
hi realdotty, have you become a flat earther yet?
Awesome project!
Thank you!
My first thought when it turned into the grass was that you flipped a sign 😋
This was amazing work ❤ good job
Excellent work Austin!
Are you rectifying your camera images?
😍Viewing angle transition @ 11:10 😍
Which python library did you use to interface/connect the Arduino to your main python program?
You should have also implemented a simple PID controller first. Usually that helps in debugging most of the hardware-level code. Still, really good project. Getting hardware results is always hard 😃.
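For reference, a textbook PID loop of the kind suggested here, used to drive the steering motor toward a target angle measured by the potentiometer. The gains and the 50 Hz update interval in the usage note are placeholders to tune on the kart.

```python
class PID:
    """Plain PID controller with clamped output, suitable for a steering motor."""

    def __init__(self, kp, ki, kd, out_min=-1.0, out_max=1.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.out_min, self.out_max = out_min, out_max
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measured, dt):
        error = setpoint - measured
        self.integral += error * dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        out = self.kp * error + self.ki * self.integral + self.kd * deriv
        return max(self.out_min, min(self.out_max, out))

# Example use at ~50 Hz:
# pid = PID(kp=2.0, ki=0.1, kd=0.05)
# motor_command = pid.update(setpoint=target_angle, measured=pot_angle, dt=0.02)
```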
I wonder if having one of those 360° selfie cameras would be better than multiple? Would it be simpler for the computer?
Perfect 🚗💻📷
Great video.
Was wondering if you could strip the ECU with the AI chip from a wrecked or totaled Tesla and just install it in your mini?
It would be easier, right?
Thanks for the vid
Lol, this is cool and all, but earlier versions of Tesla's FSD were/are open source and you could've just modified/extended the real deal. Definitely learned more this way though.
Instead of a switch to disengage, maybe use a force sensor to calculate how many N·m should be applied to the wheel before it lets you take control. In the same manner, you can have the motor turn to keep the force within a range.
Another thought is to use progressive memory mapping with a predictive path to guesstimate the corner angle and adjust the maximum speed before loss of traction.
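A minimal version of that force-based takeover, assuming some way to read steering-column torque; `read_steering_torque()` and the `motor` object are hypothetical, and the threshold is a placeholder to calibrate against what a deliberate grab actually reads as.

```python
def driver_wants_control(torque_nm, threshold_nm=4.0):
    """True when the driver pushes the wheel harder than the threshold torque."""
    return abs(torque_nm) > threshold_nm

# In the steering loop (hypothetical sensor and motor API):
# if driver_wants_control(read_steering_torque()):
#     motor.release()              # hand the wheel back to the driver
# else:
#     motor.track(target_angle)    # keep following the model's command
```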