I’m just rewatching this every day, the quality of this video is one of the best I’ve seen
Thanks a lot :D I worked really hard on it
You definitely earned a sub, bro! These 3D graphics are better than what YouTubers with 100k or more subs make.
DAMN! This video quality is amazing. If you continue with this content I'm sure you will make it big in YT.
yo this video is awesome, the augmented reality and editing is fricken awesome! Just subbed!
Wow, this is so cool. I love the animation and the project! Can't wait for v2
Excellent. I have been using secondhand store toys for the platforms. The cheap toys are better than my programming skills, but at low cost I'm gaining ground. A New Bright Grave Digger, now controlled by Arduino proportional RC. You have raised the bar; I'm on it.
Man this is epic. The video was beyond what I expected
Nice explanation and editing!
Hi! Can you please tell me which version of TensorFlow you used for this project? :) And what steps do I need to take after installing NVIDIA CUDA for image processing? Could you share any resources you found helpful? Amazing project btw :)
This is very impressive, wow
3:21 What interface and visualization did you use? Is it ROS?
Thats the GPS visualisation from u-blox, the company that made the GPS module. The software is called u-center.
@@MarcelEgle Wow, thank you for replying, appreciate it!!! One more question: did you use the full TensorFlow version on the Jetson Nano, or did you use the TensorRT version for inference? I have an RC car and a Jetson Nano lying around, and I want to convert it into a self-driving car soon.
@@dubber889 Actually I used TensorFlow Lite (although I'm not sure how much of the GPU TensorFlow Lite used). Later on I ran the same model on a Jetson Xavier NX with full TensorFlow and more parameters and it worked even better :D
Awesome! I want to do this with a power wheels vehicle.
1:33 I- I- .......this is good.
Arduino, Raspberry Pi, or a PIC???
I used an Nvidia Jetson Nano, and an Arduino Nano for the RC controller board.
Waiting for next episode
I'm currently working a full time job, so free time is pretty sparse right now. But i definitely want to do more projects.
Please sir, make a full build video, please
I have a question: in which programming language, or with which tool, are you doing this? I want to do such things too
I used Tensorflow for machine learning together with Python
Can you share the code with us, please? Because I'd like to work on this project too
Whoa very nice! Do you think the 2gb Jetson Nano is good enough? Which one did you use?
I think i used the 4gb version... For TFLite it should definitely be enough, but for regular Tensorflow im not sure.
That’s amazing! Keep going!
To add object avoidance, what would you do?
Actual "avoidance" would be hard, because then this system needs a proper understanding of where it is and of its surroundings to plan how to get around obstacles (probably some kind of SLAM system). Maybe for smaller obstacles the AI could just be trained to drive around them, but the road always needs to be visible, and as soon as the object is out of frame the AI has no idea that it is there. Object detection, so simply braking and waiting until the road is clear again, could be implemented relatively easily.
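The stop-and-wait behaviour described above could be sketched roughly like this in Python (the detection format, score threshold, and control values are all my assumptions, not the project's actual code):

```python
# Hypothetical sketch of "brake and wait until the road is clear":
# a detector (e.g. an SSD model) returns (label, score) pairs per frame,
# and a brake override replaces the lane-following output while anything
# is confidently detected.

def obstacle_present(detections, score_threshold=0.5):
    """True if any detection is confident enough to stop for."""
    return any(score >= score_threshold for _label, score in detections)

def control_step(steering_from_net, detections):
    """Combine the net's steering with a simple brake override."""
    if obstacle_present(detections):
        # Hold the brake and wait; resume once the frame is clear again.
        return {"steering": 0.0, "throttle": 0.0, "brake": 1.0}
    return {"steering": steering_from_net, "throttle": 0.3, "brake": 0.0}
```

Once the detector stops reporting the obstacle, the next `control_step` call falls back to normal lane following automatically.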
best video with lots of details.
Hello sir Please make full making video
Where can I get the dataset you used to train the model?
I didn't upload it since it was too large, and the original sadly was lost on a failed RAID array.
I really need your help with more information about this, it's related to my project
What question do you have? Happy to help
@@MarcelEgle Can you give me your social media contact so we can keep in touch? 🙏 Appreciate it
But for now what should I learn before starting to build this car (arduino, raspberry pi, python...)
I need you to guide me
@@imanebertal9985 This is a hard question to answer. If you're starting from zero (never programmed before), then I would not start with this kind of project (at least not building everything yourself; there are ready-made, so-called "Donkey Car" kits available, I think). To be completely honest, I would start with the thing you are most excited about, because you will need a lot of motivation to keep going. This is how I started out: I picked topics that I was interested in, and almost always it helped me with a future project. Hope this helps!
@@MarcelEgle It will, thank you so much! But you didn't answer where I should start, I really need your help with that. I promise I'm not gonna bother you, just a few questions a day 😔😊
Hey man, nice work! I'm very interested in the RC controller you made using the Arduino Nano. Is there any documentation available for it, or published code and schematics?
Initially I wasn't sure if this would be helpful to anyone, but since you asked, I uploaded everything to a separate repository: github.com/Check2016/aicar-rc-controller
I always wanted an RC car that drives itself
Cool project. Can I try to make it?
First of all, Awesome video! You definitely put a lot of time and effort into the project and the video.
But why did you use a GPS? Can't you just capture the PWM signal going to the servo and speed controller and use that to train your model and get the speed of the car for inference? That way your car will still drive with the SSD taped to the side :P.
Good question. I guess you could use some sensors on the wheels to get the RPM, maybe with a better speed controller that tracks the RPM of the motor (I used all the default components of the RC car)... But for me this was the cheapest and fastest way to get the speed of the car. I'll definitely keep it in mind for next time.
@@MarcelEgle I meant PWM, not RPM. You could just read those signals coming from the RC receiver without any additional sensors. I actually made a car similar to yours but I used the PWM signal from the RC receiver to determine the steering angle when collecting data for training. I used the same Raspberry Pi Zero (w) to read the PWM as I used for all the other stuff.
But I'd say my project is a little more "scuffed" than yours, because I have to stream the frames from my RPi to my laptop so my laptop can do the inference and send the result back. The poor RPi is just too weak to do the inference. This setup works, but the connection is pretty bad, so it sometimes just drives into walls... The Nvidia Jetson, which you used, although expensive, seems like the best tool for the job. If you're interested in my project, I have a video of it on my profile.
@@Theking-uy3wu Ahh, ok, now I know what you meant. Yes, I used the PWM signals from the receiver to get the steering value and throttle/brake (my input signals from the receiver), but additionally I also collected the current GPS speed for the neural net to make better predictions. If you only have a camera image in front of a corner, you don't know if you need to brake, or maybe even speed up if you're standing still... Cool, I'll check out your video :D
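For reference, decoding a receiver channel like this boils down to timing the high pulse between edges and normalizing it. A minimal sketch (the 1000-2000 µs servo pulse range and the 32-bit microsecond tick wrap, as used by pigpio on a Raspberry Pi, are assumptions about a typical setup):

```python
# Sketch of turning RC receiver PWM into a -1..1 control value.
# Edge timestamps would come from a GPIO edge callback (e.g. pigpio);
# here only the pure timing math is shown.

def pulse_width_us(rising_tick_us, falling_tick_us):
    """Pulse width from two edge timestamps; handles 32-bit tick wrap-around."""
    return (falling_tick_us - rising_tick_us) % (1 << 32)

def pulse_to_control(pulse_us, lo=1000, hi=2000):
    """Map a servo pulse width to -1..1 (0 at the typical 1500 us neutral)."""
    pulse_us = min(max(pulse_us, lo), hi)  # clamp out-of-range glitches
    mid = (lo + hi) / 2
    return (pulse_us - mid) / ((hi - lo) / 2)
```

So a 1500 µs pulse maps to steering 0.0, and 1000/2000 µs map to full left/right (or full brake/throttle on the other channel).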
Awesome dudeeeeeeeeeeeeeeeeeeeeeeeee just amazing
When this becomes a regular car, they should add a function where if you play "Highway to Hell" it will try to kill you in every way possible
Awesome! Please, can I use a Raspberry Pi instead of a Jetson Nano?
Unfortunately no, because you need quite a bit of computational power for this AI to run in realtime ^^ But with a stripped-down version it may be possible to run it on a Raspberry Pi with TensorFlow Lite
Just awesome
What type of wheels did you use in your project, bro?
What do you mean exactly?
@@MarcelEgle I mean the car's wheels, bro
Wow, this is really great. I could see your hard work in your handiwork. Great video
Please, what software did you use to make your video?
Thanks :D I used Adobe After Effects to create the clips and Adobe Premiere to edit/composite them. The 3D Stuff was done in Blender.
great job
Hi, which software did you make the animations in?
The 3d effects were done in Blender, then combined with 2d effects and animations in Adobe After Effects and finally edited in Premiere Pro.
Have you considered getting the AI to slow down the car when it needs to turn sharply?
Yes I have, it already kind of learned braking before a corner. I also did a data recording session where I just recorded myself braking in front of corners, but it probably needed even more data for this single network.
@@MarcelEgle cool!! I am looking forward to an update video when you have time?
@@coolthought8456 I really want to make a new project/video. At the moment i dont have a lot of spare time, but i'm trying to make time for it.
Your editing skills are fire🔥🔥🔥🔥
Wow, amazing. I have done this using simple computer vision techniques, but I want to do the same. However, I'm not able to make a dataset like that. Can you give me your dataset? It would mean a lot to me. Thank you so much.
I didnt host the dataset anywhere i'm sorry, but i will keep it in mind for future projects.
@@MarcelEgle I really need it. 😔
@@MarcelEgle How did you give the command for the Jetson Nano to run? I'm using an SSH connection, but it's giving me an error about not displaying the Jetson Nano camera.
@@ansjaved3484 Probably because an SSH connection doesn't have a display output set... I was indeed using SSH, but I didn't use any GUI, so I didn't need a display.
@@MarcelEgle Dear, I need your help with sensor data fusion, as I want to fuse a lane line detection algorithm with an obstacle avoidance algorithm. I'm using an IMX camera with a 160-degree FOV and an RPLidar A2 M8 2D 360-degree lidar, controlled with the Pyrplidar library. I want to avoid obstacles when they come into the path of my car while it follows lane lines. I have 2 models: one does lane line detection and gives me a steering angle; the other does obstacle and object detection in the frame, but only detection. Now I want to avoid obstacles coming in front of my car, but I don't want my car to stop following the lines when an obstacle appears.
Please help me in this regard.
Thank you so much.
Sweet, man! This is an awesome video. How long did the total process take, if I may ask?
Thanks :D I think it was 3 Months
Why did you use the Jetson Nano?
I decided on the Jetson Nano mostly because I wanted to try it out. I thought it could have a lot of potential with an actual Nvidia GPU... For just using TFLite, a Coral dev board might have been the better choice (I think it uses TPUs), since for me regular TensorFlow was too slow. I later upgraded to a Jetson Xavier NX, and together with regular TensorFlow it's unbeatable in performance for such a small (and low-power) device.
@@MarcelEgle thanks for the very good info
@@MarcelEgle Great video! How much voltage does the jetson xavier nx need? And how are you powering it? Thank you!
The Xavier NX needs 9-19V as input (it gets converted to 5V board voltage) and draws up to 5A (usually quite a bit less; that is just the maximum rating: 15W, so 3A at 5V under full load, plus max. 4x 500mA of USB power). I used a step-down power supply board and connected it to the balancer cable of the lipo (to ground and the "last" cell, so basically just the lipo output). For me this setup was pretty convenient, because I didn't need to solder anything to the battery or the car, and I basically already had a free plug. In my case the wires had a big enough diameter to support the load of the Jetson. If you try the same, always check that your wires can support the maximum amps you can expect.
@@MarcelEgle Thanks for the info. So, effectively you stepped down the lipo voltage with the buck converter and fed it into the power input of the Xavier NX. I was planning to upgrade from the Jetson Nano to the Xavier NX, but was worried about the power supply part, as it's power-hungry for deep learning networks.
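To size the wires, a quick battery-side current estimate can be done with the numbers above (a back-of-the-envelope sketch; the converter efficiency and 3S lipo voltage are my assumptions):

```python
# Rough power budget for feeding a Jetson from the car's lipo through a
# step-down converter. The 15 W module load and 4x 500 mA USB at 5 V come
# from the maximum ratings quoted above.

def battery_current_a(load_w, v_battery, efficiency=0.9):
    """Current drawn on the battery side of a buck converter."""
    return load_w / (v_battery * efficiency)

jetson_w = 15.0              # module under full load
usb_w = 4 * 0.5 * 5.0        # four USB ports, 500 mA each at 5 V -> 10 W
total_w = jetson_w + usb_w   # 25 W worst case
amps = battery_current_a(total_w, v_battery=11.1)  # roughly 2.5 A from a 3S lipo
```

So even in the worst case the balancer-lead setup only has to carry a few amps, which is why checking the wire gauge against that figure is enough.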
It's wonderful 🥰
Damn good!
Awesome project! Amazing video editing skills!! One question: How do you make/fabricate your chassis?
Thanks so much :D Do you mean the car itself or the aluminum parts? The car base came prebuilt, and everything out of aluminum was added afterwards. I measured the car, especially the holes for mounting the default canopy, and then planned the aluminum parts in Fusion 360. I cut the raw pieces with a jigsaw, ground off the rough edges, drilled the holes, and finally assembled the parts using screws. For the camera mount I used a small aluminium plate suspended from anti-vibration rubber balls, but I'm not sure exactly how much they helped. I'm sure they didn't make the vibrations worse, though ^^ Hope that helps; if I misunderstood your question or you want to know something else, just ask :D
@@MarcelEgle Thanks for reverting. Yeah, I meant the aluminum fabricated parts. It looks very neat and well done. Weird as it may sound, the biggest challenge for me is to source these customized frames and chassis for my needs. Perhaps when you have time, make a small video on fabricating these mounts, chassis, and frames. It will be very helpful for someone like me who's beginning to build stuff and not just assemble an off-the-shelf kit....
can you post the code on github?
I'll put together a github repo on the weekend :D
I added a GitHub repository: github.com/Check2016/aicar-lanefollowing. I didn't have a lot of time this weekend, but at least most of the important scripts should be there. I'll add a readme and more stuff over the next few days/weeks.
Awesome
Thanks :D
The content needs to be made clearer.
You should change the thumbnail, it isn't catchy enough for the video tbh