Jetbot Neural Network Based Collision Avoidance

  • Published 20 Jul 2024
  • In this video, I demo the Collision Avoidance example included in the Jetbot's Jupyter notebooks. The Jetbot is an open-source robot based on NVIDIA's Jetson Nano that can be built for ~$250 USD. See below for links to parts, code, and documentation.
    Link to dataset and trained model used in this video: drive.google.com/drive/folder...
    Paid affiliate links:
    Jetson Nano ► amzn.to/31DV20K
    DC Motors ► amzn.to/2Ri28mY
    Motor Driver ► amzn.to/2FbcLmE
    OLED Display ► amzn.to/31xYIBk
    1" Delrin Plastic Ball ► amzn.to/2KRu2oJ
    WiFi ► amzn.to/2RmorIb
    Right Angle USB Micro-B ► amzn.to/2Rggl3S
    10 Ah LiPo Battery ► amzn.to/2WJbrNX
    Jetbot Github Repo ► github.com/NVIDIA-AI-IOT/jetbot
    Jetbot Bill of Materials ► github.com/NVIDIA-AI-IOT/jetb...
    Buy me a coffee! ► www.buymeacoffee.com/BaCxh6ues
    My RedBubble store ► www.redbubble.com/people/Zack...
  • Science & Technology

COMMENTS • 174

  • @quinncasey120
    @quinncasey120 5 years ago +35

    "I can use this as data" hilarious!

  • @markmilliren1453
    @markmilliren1453 5 years ago +2

    Awesome stuff Zack. This one I understood a little more than the subscribe counter. Love the video structure and creativity!

  • @user-lz2kz5kc4t
    @user-lz2kz5kc4t 4 years ago +1

    I was going to say "ha, that's overtraining right there," but it turns out to work pretty well!! Very nice work!

  • @Bippy55
    @Bippy55 4 years ago +3

    You've highlighted the system very well. It's especially encouraging to an older engineer who used to program in hex. Thanks very much!

    • @ZacksLab
      @ZacksLab  4 years ago

      Dave, I'm really glad that it was able to help you, I hope you enjoy the adventure! :)

  • @MakeCode
    @MakeCode 5 years ago +4

    I've become a big fan of this channel! This is what I wanted to do.

  • @John-nr1ez
    @John-nr1ez 5 years ago +1

    This video is so well done, awesome job! I like the JetBot color scheme and Jupyter theme :)

    • @ZacksLab
      @ZacksLab  5 years ago

      Thanks so much! I ordered 3D printing filament just to get those colors for the chassis :)

  • @flaviuspopan8024
    @flaviuspopan8024 5 years ago +1

    So damn glad you started making videos, they’re really entertaining and inspiring.

    • @ZacksLab
      @ZacksLab  5 years ago

      Thank you Flavius, that means a lot to me!!

  • @diggleboy
    @diggleboy 4 years ago +1

    Great video Zack!
    I'm looking to get into the NVIDIA Jetson Nano for signal processing. Nice to see how easy it is to use PyTorch to train the classifier, download it to the Jetson board, and run it. This example you gave is really cool. Liked. Subbed. Smashed the bell.

  • @toranarod
    @toranarod 5 years ago +2

    Thank you. Best demo I have seen, with information that really helps me move forward.

  • @lilasarkany3381
    @lilasarkany3381 5 years ago +16

    It's interesting how good the quality of your videos is and how well you explain stuff, but you haven't even reached 1K.
    I think you deserve more.

    • @ZacksLab
      @ZacksLab  5 years ago +1

      thank you Lila, I'll keep the videos coming regardless, I hope to hit 1k soon!

  • @steveconnor8341
    @steveconnor8341 5 years ago +1

    I love the incoming light bulb threats. Great video

    • @ZacksLab
      @ZacksLab  5 years ago

      haha, thank you! :D

  • @Blobcraft13
    @Blobcraft13 5 years ago +2

    I'm loving these videos

  • @kamalkumarmukiri4267
    @kamalkumarmukiri4267 5 years ago

    Wonderful video.... Thanks for sharing. I am eagerly waiting for my Nano kit ordered from Amazon :)

  • @kestergascoyne6924
    @kestergascoyne6924 5 years ago +1

    Thank you very much. I might build this as my first robot!

    • @ZacksLab
      @ZacksLab  5 years ago

      You’re welcome, let me know how it goes!

  • @TheRealMoviePastor
    @TheRealMoviePastor 4 years ago

    Awesome. Love the breakdancing. LOL

  • @gusbakker
    @gusbakker 4 years ago +19

    "breakdancing? oops... I'll consider this a feature"

    • @RAP4EVERMRC96
      @RAP4EVERMRC96 4 years ago

      Went to the comments to see if somebody already commented :D

  • @AngryRamboShow
    @AngryRamboShow 5 years ago

    Cool channel Zack. I have a 1080 Ti that will be used for training data, but I'm still waiting on the Nano to be delivered :S

  • @nicholasbeers5913
    @nicholasbeers5913 5 years ago +3

    I'm loving the shirt man, best game ever!

    • @ZacksLab
      @ZacksLab  5 years ago

      omg I know, UO ruined every other game for me, nothing will ever compare. What server did you play on? I was on Atlantic from 2002-2004... I was on Yamato before that because I had no idea what servers were when I was first starting and I just randomly chose one.

  • @hfe1833
    @hfe1833 4 years ago +1

    Of all the Jetson Nano videos I've seen, this is the most practical demonstration, almost a real-world scenario. Congrats, bro! By the way, I hope you can add reading numbers, like speed limit signs, to simulate a car's speed limit.

    • @ZacksLab
      @ZacksLab  4 years ago +1

      Thank you! Yes, adding road sign detection is actually what I want to work on next. I was thinking about putting a Nano and camera on my car dashboard to collect data and start working on sign and streetlight detection and interpretation.

    • @hfe1833
      @hfe1833 4 years ago

      @@ZacksLab this will be great and awesome, you deserve one subscriber, in 3... 2... 1, bell button clicked, done

  • @FunkMasterF
    @FunkMasterF 4 years ago

    Great video. Thank you. +1 for breakdancing.

  • @naimuddinshimul2770
    @naimuddinshimul2770 5 years ago

    Great video!

  • @SandeepGhoshaction
    @SandeepGhoshaction 5 years ago +2

    Awesome project! I was looking for exactly something like this.
    Can the same concept be applied using a Raspberry Pi 3 B+?
    Please keep posting related stuff, because YouTube has tons of electronics videos as well as tons of DL/NN videos... but electronics combined with AI really isn't covered much.
    Subscribed!!

  • @GWebcob
    @GWebcob 3 years ago

    Underrated video

  • @ThomasGodart
    @ThomasGodart 5 years ago

    Great video! Thanks for sharing it 👍

    • @ZacksLab
      @ZacksLab  5 years ago +1

      thank you! and you're welcome :)

  • @jackflynn3097
    @jackflynn3097 5 years ago +2

    Awesome, I'm working on my TurtleBot project, using ROS's Gazebo for simulation and A3C (a reinforcement learning algorithm) to train the bot. It saves tons of time by avoiding gathering and labeling data.

    • @ZacksLab
      @ZacksLab  5 years ago

      jack flynn, interesting. How is the simulation environment generated? It would be cool to see a side by side comparison of both methods. I would imagine that the ideal training set is a combination of both sims and real data.

    • @jackflynn3097
      @jackflynn3097 5 years ago

      @@ZacksLab well, the problem is not about the simulation, it's about deep RL algorithms. DeepMind's research, from DQN to DDPG and A3C (deep RL methods), takes raw pixels as input and learns how to avoid obstacles and even navigate through mazes.

    • @jackflynn3097
      @jackflynn3097 5 years ago

      @@ZacksLab this is a video by DeepMind; it shows results playing TORCS using A3C methods:
      ua-cam.com/video/0xo1Ldx3L5Q/v-deo.html

    • @ZacksLab
      @ZacksLab  5 years ago

      jack flynn So does the AI in the video you shared always stay inside the simulated environment? What happens when you put AI that was trained strictly on simulated data into a physical device in the real world and it encounters scenery, lighting conditions, objects, and scenarios that the simulator wasn’t able to provide?
      The issue with training on data generated in simulators is that the real world throws scenarios at the AI that the simulations just can’t account for. Are you saying that the A3C method solves this issue?

    • @jackflynn3097
      @jackflynn3097 5 years ago

      @@ZacksLab okay, I got it. Actually, RL's main idea is learning by interaction. It lets the agent (the Jetson bot in this case) try moves and gain rewards. If it hits an obstacle, the episode is finished and it gains a negative reward. The agent's goal is to maximize total reward. So after episode upon episode of learning, it adjusts the parameters in the agent's NN.
      This can be done in a simulated environment or in the real world. The agent is a policy network, which in A3C methods is also called an Actor. When you input a state (an image from the camera), it outputs an action (left, right, forward, or stop). A3C is an RL method, and RL differs from supervised learning in that you don't need to give your learner labeled data.
      Back to the simulated environment: if training the bot is difficult in the real world, it can be done in a simulated environment, and ROS (Robot Operating System, based on Ubuntu) provides tools for that. When your agent/actor performs well in the simulation, putting it into the real world is easy (ROS makes sure of it).
      I did some experiments; after the robot was trained in the simulated environment, it worked on every kind of ground surface. Maybe because the weights on pixels related to the ground are small. (Still working on proving that.)
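
      A minimal sketch of the episode/reward loop described above, using the classic (pre-0.26) OpenAI Gym API as a stand-in for a Gazebo-backed environment; the environment name and the random policy are placeholders that A3C's actor network would replace:

      import gym

      env = gym.make("CartPole-v1")  # a robot env would expose the same interface

      for episode in range(100):
          state = env.reset()
          total_reward, done = 0.0, False
          while not done:
              action = env.action_space.sample()  # placeholder for the policy/actor
              # e.g. hitting an obstacle ends the episode with a negative reward
              state, reward, done, info = env.step(action)
              total_reward += reward
          # an RL algorithm (A3C, DQN, ...) would update the network here
          print(f"episode {episode}: total reward {total_reward}")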

  • @yezhang2947
    @yezhang2947 4 years ago +1

    Cool! Thanks for sharing!

    • @ZacksLab
      @ZacksLab  4 years ago

      You’re welcome, glad you liked it!

  • @victorclaros8967
    @victorclaros8967 5 years ago +1

    Amazing job!!!!

  • @saidbouftane5253
    @saidbouftane5253 5 years ago

    great video

  • @makersgeneration3739
    @makersgeneration3739 5 years ago +1

    Thanks Zack! 😎

  • @tiamariejohnson6898
    @tiamariejohnson6898 4 years ago

    Wow, I want to invest in this product you engineered from NVIDIA; that is complex code and you did an amazing job

    • @ZacksLab
      @ZacksLab  4 years ago

      Ariel, thank you! The Jetbot is an open-source project built by the NVIDIA community. I didn't personally design the Jetbot or the code used in this video; it's all available on the Jetbot GitHub for anyone to use/experiment with!

  • @dgb5820
    @dgb5820 4 years ago

    Really appreciate this video

    • @ZacksLab
      @ZacksLab  4 years ago

      thanks! appreciate the comment :)

  • @JuanPerez-jg1qk
    @JuanPerez-jg1qk 3 years ago

    You could train it to search for items as well, like missing keys: show it the keys, then hide them, put the bot in search mode, and it will notify you by phone or an alarm device

  • @woolfel
    @woolfel 5 years ago

    I just bought a Jetson Nano too. Have you tried running Faster R-CNN on the Nano? I'm still waiting until I get a power brick and battery.

  • @isbestlizard
    @isbestlizard 4 years ago

    YES this is awesome! I'm going to stick my Nano on a quad drone and make it learn how to FLY ITSELF :D

    • @isbestlizard
      @isbestlizard 3 years ago

      @no one expected the spanish inquisition I'm doing an MSc in AI which includes a project; I picked reinforcement learning and am going to get it to learn in a simulator, then transfer it to hardware!

  • @tornShoes010
    @tornShoes010 5 years ago +2

    I would like to know more about how you configured a custom theme for the jupyter notebook.

    • @ZacksLab
      @ZacksLab  5 years ago

      I believe you can only set the theme with JupyterLab, not Jupyter Notebook. In JupyterLab, go to Settings -> JupyterLab Theme

  • @airinggemi
    @airinggemi 4 years ago

    Great video. I don't know much about AI, but your video makes me excited. Unfortunately, I can't buy the motor driver and plastic ball in my country; what should I use to replace them? And after I build the car, how do I use your data? Should I buy the Jetbot's notebook?

  • @ryanc5195
    @ryanc5195 5 years ago +6

    Hi, it's a great video. I just got a Nano and am wondering how to set up training to save my own data. Will you have a step-by-step video? Thanks

    • @ZacksLab
      @ZacksLab  5 years ago +1

      Hi Ryan, thank you! If you follow the collision avoidance example in the Jetbot's GitHub repo under NVIDIA-AI-IOT/jetbot/notebooks/collision_avoidance, you will find a Jupyter notebook called data_collection.ipynb. Launch this notebook on the Jetbot and run through the code; if your Jetbot hardware is set up correctly, everything should go smoothly.
      I can definitely do a step-by-step video on this, but it will take me a bit to get it posted!
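
      For reference, the heart of that data_collection.ipynb notebook boils down to roughly the sketch below: grab the latest camera frame and save it into a 'free' or 'blocked' folder with a unique filename. A condensed approximation of the notebook, not a verbatim excerpt:

      import os
      from uuid import uuid1

      from jetbot import Camera, bgr8_to_jpeg

      camera = Camera.instance(width=224, height=224)
      for d in ('dataset/free', 'dataset/blocked'):
          os.makedirs(d, exist_ok=True)

      def save_snapshot(directory):
          # write the current camera frame as a uniquely named JPEG
          path = os.path.join(directory, str(uuid1()) + '.jpg')
          with open(path, 'wb') as f:
              f.write(bgr8_to_jpeg(camera.value))

      save_snapshot('dataset/free')     # call when the path ahead is clear
      save_snapshot('dataset/blocked')  # call when the robot is blocked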

    • @Osmanity
      @Osmanity 5 years ago +1

      @@ZacksLab thank you! If it takes a while, that's OK, but a step-by-step video would be very, very helpful. Thanks again, dude

    • @miguelangelpicoleal3234
      @miguelangelpicoleal3234 3 years ago

      @@ZacksLab I would also like to see that in one of your upcoming videos. Hopefully it's still in your plans.

  • @angelleal3005
    @angelleal3005 3 years ago

    This is amazing! I wonder: are you moving the Jetbot back and forth while it avoids obstacles, or do you command it to go to a desired destination (obviously avoiding obstacles by itself in the process)?

    • @ZacksLab
      @ZacksLab  3 years ago

      Thanks! No, there is no input from me; it just attempts to navigate any environment you put it in while avoiding any collisions. I could modify the program to give it more of a "purpose" rather than just moving around and avoiding things.

    • @angelleal3005
      @angelleal3005 3 years ago

      @@ZacksLab Oh OK, I see. Great work, man! New sub; I would absolutely love to see more stuff of this sort. Thinking of doing a school project on this topic.

  • @grahamhilton2397
    @grahamhilton2397 4 years ago

    Great video! I am currently doing a similar project using the Waveshare JetRacer. This is a simple question, but how do you save the images for training? Also, I am doing supervised learning first, as I have an oval track to use!

    • @ZacksLab
      @ZacksLab  4 years ago

      Hey Graham, thanks! I obtained/saved the images using this Jupyter notebook: github.com/NVIDIA-AI-IOT/jetbot/blob/master/notebooks/collision_avoidance/data_collection.ipynb You could use this as a starting point for data gathering and tagging for your oval track.

  • @WildEngineering
    @WildEngineering 5 years ago

    Nice work man! RIP beer.

    • @ZacksLab
      @ZacksLab  5 years ago +1

      TheWildJarvi thanks! Haha, yeah. Looking back at the video I came close to knocking it over a few other times :P

  • @fayobam_mech_tronics
    @fayobam_mech_tronics 4 years ago +2

    I love this. Do you think a mobile 2070 Max-Q is a good GPU for learning deep learning and other artificial intelligence topics?

    • @ZacksLab
      @ZacksLab  4 years ago +1

      Thank you Ayobami! Yes, the 2070 Max-Q is certainly powerful enough to get started with machine learning and training neural networks, especially for the Jetbot or other implementations on the Jetson Nano.

  • @gusbakker
    @gusbakker 4 years ago +2

    Would be great to see a drone project made with the Jetson Nano

    • @ZacksLab
      @ZacksLab  4 years ago +1

      I'd love to do something with the Nano and a drone platform, it's definitely on my project list. I was working for a startup using the Jetson TX2 (the big brother of the Nano) for vision based collision avoidance for industrial drones... I wrote a medium blog post about the hardware development for it if you're interested! medium.com/iris-automation/the-journey-to-casia-part-one-faea27491f02

  • @stevejeske2266
    @stevejeske2266 4 years ago

    Nice video Zack! I genuinely fear the day when I see driverless cars everywhere, but I think AI is fascinating. I hope you make more videos on this subject.
    BTW, I know Tesla is all-in with electric cars, but I am not convinced that our existing electrical infrastructure can safely and efficiently supply such an increase in demand if electric cars become popular. The brownouts in CA associated with the PG&E grid and the wildfires are just one example. Ohio just passed a law to subsidize the Perry and Davis-Besse nuclear power plants at $150 million per year for 6 years because they cannot compete with gas-fired turbines (gas is abundant and cheap). These nuclear power plants are 40 years old and should be retired. And even though there is new technology for higher-efficiency nuclear power, I know of only one nuke plant (in SC) that has been significantly upgraded in the past 10 years, due to environmental concerns. I am not an advocate of nukes, but I seriously question whether this nation's electrical grid can handle such an increased demand. So, please convey my message to Elon the next time you see him! Take care. sj

    • @ZacksLab
      @ZacksLab  4 years ago

      Thanks Steve! AI is a tricky subject that carries a lot of social and ethical concerns. It also has a lot of promising benefits that are currently in use and improving quality of life for many people today. But it is a double-edged sword.
      I do want to do more projects with AI and hardware. I've been getting crushed at work, so my time for YouTube has diminished... but I look forward to jumping back into it when I free up!

  • @knarftrakiul3881
    @knarftrakiul3881 4 years ago

    Wow... I bet this could be used to spot potholes on the road.

  • @mianwaqasarshad9611
    @mianwaqasarshad9611 2 years ago

    I have JetPack 4.6 installed on my 2 GB Jetson Nano, and I've interfaced a Raspberry Pi V2 CSI camera.
    The issue I'm facing right now is the live execution of the Thumbs task in the free DLI course (sample programs).
    The Nano works fine while taking samples of thumbs up and down; in fact, it trains the neural network perfectly.
    But during live execution for prediction purposes, it is unable to determine whether I am holding thumbs up or down.
    I've been stuck on this for months; I've even run the same sample on my friend's Nano, but I couldn't find a remedy.
    Will be waiting for a helpful response.

  • @a36538
    @a36538 5 years ago

    Really cool! Could you mate this with an RC car chassis? Could one make an autopilot RC car?

    • @ZacksLab
      @ZacksLab  5 years ago

      a36538 yes, absolutely! This could control anything: an RC car, your car, a drone, heavy machinery, you name it. It's just a matter of interfacing it properly to all of the sensors and actuators!

  • @angelleal517
    @angelleal517 3 years ago

    Do you know the actual range of the camera? How far away can it detect objects? Great video, by the way!

    • @ZacksLab
      @ZacksLab  3 years ago

      I do not know the max detection range of this camera (it's also a function of the size of the object). I have worked with high-megapixel cameras capable of doing object classification out to 1 km. Of course, this depends on your computer vision and post-processing algorithms as well.

  • @sohaibarif2835
    @sohaibarif2835 5 years ago +1

    Important point: it DOES NOT support WiFi and Bluetooth out of the box. You need to purchase and install a module. Also, I just learned the hard way that power is an issue too. On mine, after installing the module, it will not turn on with USB power.

    • @ZacksLab
      @ZacksLab  5 years ago +1

      Sohaib Arif, good point, I should have explicitly stated that. Are you using the m.2 key or a USB dongle? I have not had any power issues with the Intel WiFi/BT card. If you're using the dongle and the issue is due to power draw on VBUS, you could try setting the jumper to use power coming from the barrel jack which allows for up to 4A I believe. You'd have to adapt the output of your battery to this connector though.

    • @sohaibarif2835
      @sohaibarif2835 5 years ago

      @@ZacksLab I am using the M.2 key. Interestingly, I tried powering it via a portable USB phone charger that I know sends 5V/2A and it worked but it does seem to be slower now. You are right about the 4 A barrel jack, I will add that soon. Do you have any suggestions for a portable version of that config? I am mostly a software guy so I don't have much experience with the electrical stuff.

    • @ZacksLab
      @ZacksLab  5 years ago +1

      I would look for a battery that can source 5V up to 4A from a single port (I think the battery on the bill of materials for the jetbot can do 3A per port, which is likely more than enough although I haven't done a power study on the jetbot). Then, use a USB type A to barrel jack adapter like this one: www.bhphotovideo.com/c/product/1368294-REG/startech_usb2typem_3_usb_to_type.html/?ap=y&gclid=CjwKCAjwq-TmBRBdEiwAaO1enw753uFBGzvPy3oIlOcMy3uRFGAFWwvLlx5PHGL2FudDY-Jb9OE1qhoCOvAQAvD_BwE&lsft=BI%3A514&smp=Y
      Make sure you connect the J48 Power Select Header pins to disable the power supply via Micro-USB and enable 5V @ 4A via the J25 power jack.

  • @lanliu9263
    @lanliu9263 5 years ago

    Hi Zack, it is an amazing Jetbot. Could you share the brand and model of your motor driver and DC motor?

    • @ZacksLab
      @ZacksLab  5 years ago

      Hi lan liu, here are the links for the motor driver and motors:
      amzn.to/2FbcLmE (driver)
      amzn.to/2Ri28mY (motors)

    • @lanliu9263
      @lanliu9263 5 years ago +1

      @@ZacksLab thanks

  • @rksb93
    @rksb93 5 years ago

    Hey, as a beginner I had a question regarding your training data images: did you use augmentation in any form to increase the number of images you could train your NN on?

    • @ZacksLab
      @ZacksLab  5 years ago

      Hi Surya, no, I did not use any sort of augmentation (I believe you're referring to translations, rotations, etc...). I would be interested in seeing how this affects performance if there were a tool that would automatically grow a dataset using this technique. Thanks for the question!

    • @John-nr1ez
      @John-nr1ez 5 years ago

      Hi Surya, the data augmentations that you can apply depend on the task. For JetBot collision avoidance, the data augmentation only includes pixel-wise color distortion (brightness, hue, saturation, etc.). Horizontal flipping might also be appropriate for this task, since it doesn't matter whether we're blocked on the left or right. However, cropping, translations, and scaling change the perspective of the camera relative to objects, which would change the objective. For example, if we 'zoom' the image as a form of data augmentation, we would end up seeing some 'far away' objects as nearby; we would want to label those as 'blocked', but they would falsely keep the original tag 'free'.

    • @rksb93
      @rksb93 5 years ago

      John, that makes sense. So in essence he can get twice the amount of data by flipping images on the vertical axis, but any other form of augmentation is not worthwhile. Did I get that right?
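
      For anyone wanting to try the augmentations discussed here, a sketch with torchvision: color distortion plus horizontal flips, while deliberately avoiding the crops and zooms that would change the apparent distance to obstacles (the dataset path is an assumption):

      import torchvision.transforms as transforms
      from torchvision import datasets

      train_transforms = transforms.Compose([
          transforms.ColorJitter(brightness=0.2, contrast=0.2,
                                 saturation=0.2, hue=0.1),
          transforms.RandomHorizontalFlip(p=0.5),  # blocked-left vs. blocked-right is symmetric
          transforms.Resize((224, 224)),
          transforms.ToTensor(),
      ])

      # flips/jitter are applied on the fly each epoch, so the effective
      # dataset is larger than the number of images on disk
      dataset = datasets.ImageFolder('dataset', transform=train_transforms)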

  • @eliotacura9080
    @eliotacura9080 3 years ago

    This might be a simple question, but how do you transfer your dataset to the desktop PC and transfer the trained model back to the Nano for demos? I know you mentioned via WiFi, but I'm curious about a more in-depth explanation. Thanks.

    • @ZacksLab
      @ZacksLab  3 years ago

      I use WinSCP to do secure file transfer. You can open an SFTP connection to the IP address and transfer files to and from your PC and the remote connection (in this case, the Jetbot).

  • @LakerTriangle
    @LakerTriangle 5 years ago

    What language would I have to learn to use the Nano? I always wanted to do something with face recognition.

    • @ZacksLab
      @ZacksLab  5 years ago +1

      You can do quite a bit with just Python and a familiarity with Linux. C++ is useful for OpenCV, but there is a Python package called opencv-python that wraps the original OpenCV libraries. You will sacrifice some run-time performance using this Python wrapper instead of developing in C++, but development in Python is generally considered easier.
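
      As a taste of how far Python alone gets you, a minimal face-detection sketch with opencv-python (the Haar cascade file ships with the package; camera index 0 is an assumption):

      import cv2

      face_cascade = cv2.CascadeClassifier(
          cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
      cap = cv2.VideoCapture(0)

      while True:
          ok, frame = cap.read()
          if not ok:
              break
          gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
          # draw a box around each detected face
          for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
              cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
          cv2.imshow("faces", frame)
          if cv2.waitKey(1) & 0xFF == ord("q"):
              break

      cap.release()
      cv2.destroyAllWindows()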

  • @evansyomu2879
    @evansyomu2879 4 years ago

    Does the Jetson Nano take any camera that has a CSI (MIPI) interface?

    • @ZacksLab
      @ZacksLab  4 years ago

      Yes, it supports MIPI CSI-2 cameras. Here's an example of getting the Raspberry Pi v2 camera working with the Nano: bit.ly/2oCborL
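
      On the Nano, CSI cameras are typically read through a GStreamer pipeline. A sketch for the Pi v2 camera with opencv-python, assuming a JetPack 4.x-era nvarguscamerasrc pipeline and an OpenCV build with GStreamer support (exact elements can vary by L4T version):

      import cv2

      pipeline = (
          "nvarguscamerasrc ! "
          "video/x-raw(memory:NVMM), width=1280, height=720, framerate=30/1 ! "
          "nvvidconv ! video/x-raw, format=BGRx ! "
          "videoconvert ! video/x-raw, format=BGR ! appsink"
      )
      cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
      ok, frame = cap.read()  # frame is a plain BGR numpy array if ok is True
      cap.release()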

  • @shikharsharma1495
    @shikharsharma1495 4 years ago

    How much time does it take to train on the dataset? The Jupyter kernel has shown "Busy" for the last 45 minutes.

    • @ZacksLab
      @ZacksLab  4 years ago

      With a 1070 Ti GPU it took a few minutes

  • @LouieKotler
    @LouieKotler 5 years ago

    Amazing content! I'm an aspiring electronics engineer and hope to be like you one day. Do you do this work professionally? Any tips for someone like me who wants to start working with software like PyTorch but only has a general understanding of statistics and calculus? Thanks.

    • @ZacksLab
      @ZacksLab  5 years ago +1

      Hey Louie, thanks for checking out my channel! Yes, I'm an electrical engineer working on collision avoidance systems for autonomous drones. At work my focus is mostly in hardware design but at home I like to explore other topics (like this). I'd recommend checking out some courses online, there's a course called "Practical Deep Learning with PyTorch" on Udemy that covers all the fundamentals (I'm not affiliated with the course author or Udemy in any way). Udemy usually has 90% off on their courses so look around for coupon codes -- don't ever pay the full price.

    • @SandeepGhoshaction
      @SandeepGhoshaction 5 years ago

      Awesome, sir!
      How can I get in contact with you?
      I am a beginner in electronics and computer vision.

  • @boogerrs1031
    @boogerrs1031 3 years ago

    Hi Zack! I got myself a Jetbot, but I'm having trouble with training the last layer of the AlexNet model. I moved the dataset over to my laptop and ran the code using my GPU, but it gave me this error:
    CUDA error: CUBLAS_STATUS_ALLOC_FAILED when calling `cublasCreate(handle)`
    I tried running it on the CPU

    • @ZacksLab
      @ZacksLab  3 years ago

      Hey! Without seeing your code it will be hard to help you... do you have it in a GitHub repo? I could take a look if so

    • @boogerrs1031
      @boogerrs1031 3 years ago

      @@ZacksLab Sorry, I should've edited the comment; I submitted it by accident without finishing, and then I forgot to fix it. X.X My bad. Anyway, the code is the exact same one you showed in the video, the same one on the Jetbot GitHub. I copied and pasted it from GitHub to a Jupyter notebook on my laptop, but it doesn't run the last cell, where you create the new model based on the AlexNet model and the dataset. If I run it on the GPU, I get the error message I wrote in the previous comment. If I run it on the CPU, I get another error message pointing to the line "loss = F.cross_entropy(outputs, labels)" in the last cell of the code, saying that target 2 is out of bounds. The code is the exact same one as on the Jetbot GitHub, which is kind of weird, because everywhere I look on YouTube everyone seems to have no issues with this collision avoidance program, meanwhile I'm having trouble running code that is supposed to be good as-is. By the way, thank you for replying!!!
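
      For readers hitting the same error: F.cross_entropy requires targets in [0, num_classes - 1], so "target 2 is out of bounds" usually means the dataset folder contains a third class directory (e.g. a stray folder next to free/ and blocked/) while the model's final layer only has two outputs. A sketch of the check and fix, assuming the notebook's stock AlexNet setup:

      import torch
      import torchvision

      dataset = torchvision.datasets.ImageFolder('dataset')
      print(dataset.classes)  # should print exactly ['blocked', 'free']

      model = torchvision.models.alexnet(pretrained=True)
      # replace the final fully connected layer to match the class count
      num_classes = len(dataset.classes)
      model.classifier[6] = torch.nn.Linear(
          model.classifier[6].in_features, num_classes)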

  • @69megacock
    @69megacock 5 years ago

    How did you learn to make this thing? I am studying engineering in Vietnam. This is awesome! Could you please tell me how to learn about this area?

    • @ZacksLab
      @ZacksLab  5 years ago +3

      Hi stayangrystayfoolish, to be honest I just followed the instructions on the GitHub. My background is in electrical engineering, and I work for a company that is developing collision avoidance software and hardware for drones, so I already have some knowledge through my work experience.
      If you're interested in AI, I'd recommend getting the Jetson Nano, as it's a good hardware platform to practice with, and probably the cheapest available. I've found that I learn best from doing projects, getting stuck, and then researching once I have a problem with some real-world context.

  • @watcher9412
    @watcher9412 5 years ago

    Can you use the Jetson Nano to build a drone using the battery that you have?

    • @ZacksLab
      @ZacksLab  5 years ago

      The Jetson Nano could be used onboard a drone for many different functions, however you’d still want to use LiPo batteries intended for use with motors, as the BLDC motors commonly found on drones can pull a lot more current than this battery is capable of providing safely.

  • @tmerkury2813
    @tmerkury2813 2 years ago

    Any tips on how you collect data such as images, throttle, and steering angle into a dataframe for machine learning?

    • @ZacksLab
      @ZacksLab  2 years ago

      Hey! Yes, you generally need to work with an image sensor that has a sync or trigger pin that allows you to synchronize a frame capture with data from other sensors.

    • @tmerkury2813
      @tmerkury2813 2 years ago

      @@ZacksLab Ohhh boy, I have some learning to do. Any tips on how to get started on that? My plan is to use the following:
      - An NVIDIA Jetson Nano on the RC car to run the machine learning model
      - A basic computer camera to capture images
      - A built RC car with an ESC, motor, and controller
      Is there any specific way to connect these tools to collect the data, or will I need something special?
      Sorry for the complex questions here, haha, but any helpful directions would be appreciated! Or if you have videos on this, I would love to watch. Thank you!

      @ZacksLab  2 years ago

      Have you chosen an image sensor? I would start with the datasheet to learn its different capture modes.
      From there, define your sensors for steering position, throttle, etc., and figure out their interfaces. It's likely you can use the Jetson's GPIO or SPI/I2C (whatever the interface is) to orchestrate all the triggering of data. You'll then need to define some sort of data structure for storing the image data + sensor data.
      I doubt something like this exists exactly for your use case, so you'll have to write your own drivers and software for handling all of the above. Depending on the image sensor and other sensors you choose, the drivers may actually already exist in the Linux kernel, but you'll have to enable them. I don't have any YouTube videos on how to do this, but basically you have to reinstall JetPack and recompile the kernel, device tree, and modules. There really is no easy shortcut; you will have to go down the rabbit hole of Linux.
      Alternatively, you can add a microcontroller that orchestrates the frame sync with the other data and passes it over to the Jetson side, where you handle it in software. It won't be as high-performance given the latency through the micro, but if your frame rate is low, it probably won't matter.
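
      A sketch of that simpler, software-orchestrated approach: pair each camera frame with sensor readings taken as close together in time as possible, and log everything as timestamped rows. The read_steering()/read_throttle() stubs are hypothetical placeholders for whatever interface (GPIO, I2C, serial) your sensors actually use, and camera index 0 is an assumption:

      import csv
      import os
      import time

      import cv2

      def read_steering():
          return 0.0  # hypothetical: replace with your steering sensor read

      def read_throttle():
          return 0.0  # hypothetical: replace with your throttle sensor read

      os.makedirs("frames", exist_ok=True)
      cap = cv2.VideoCapture(0)

      with open("drive_log.csv", "w", newline="") as f:
          writer = csv.writer(f)
          writer.writerow(["timestamp", "image_path", "steering", "throttle"])
          for i in range(1000):
              ok, frame = cap.read()
              if not ok:
                  break
              # read the sensors immediately after the frame to minimize skew
              row = [time.time(), f"frames/{i:06d}.jpg",
                     read_steering(), read_throttle()]
              cv2.imwrite(row[1], frame)
              writer.writerow(row)

      cap.release()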

    • @tmerkury2813
      @tmerkury2813 2 years ago

      @@ZacksLab Thank you so much for your response; I will keep all of these notes in mind going forward. It seems like I have a lot of work ahead of me, and nope, I haven't picked an image sensor yet, but I certainly will soon to get started. If all goes well, in about 8 months I'll have it done and I shall show it to you. Thanks again!

  • @erikschiegg68
    @erikschiegg68 5 years ago +2

    Talk on. I'm looking for object recognition for blind people, so they can point with their head or hand and the Nano speaks what it sees. This should work pretty much out of the box with a speaker and camera connected, I hope. There are also those nice 3D mapping cameras, which could help map a blind person's environment. An idea for you.

    • @ZacksLab
      @ZacksLab  5 years ago +1

      That's an interesting idea. Having it speak what it sees would be relatively easy; the hard part would be reliably determining what the person is pointing at from different camera angles and such.
      Do you imagine that this device would just be placed somewhere in the room, and as the person moves around and points to things it would respond (assuming the person and object are within its field of view)? Or would the person hold the device and use it to point? The latter would be much easier, but the former could be solved too.

    • @erikschiegg68
      @erikschiegg68 5 years ago

      @@ZacksLab I imagine a wristband with the camera and lateral blinders to get a narrow, defined angle of view. So it should be a portable system with a battery pack, maybe in a rucksack. You definitely do not want a fisheye camera for this task.

    • @mattizzle81
      @mattizzle81 4 years ago

      There are Android smartwatches now that have two cameras on them: one facing up, and one looking out away from the hand. TensorFlow Lite runs on Android. Personally, I don't know why people bother with things like the Jetson Nano when modern smartphones and smartwatches have so much capability now. Unless you are doing robotics, in which case these embedded devices have all the I/O ports, etc.

  • @israelsoto123
    @israelsoto123 3 years ago

    Where did you get the wheels from? Or did you print them yourself?

    • @ZacksLab
      @ZacksLab  3 years ago

      I believe these are the ones I bought: www.pololu.com/product/185
      The BOM on GitHub has Adafruit listed, but they are always sold out; alternatively, there are STLs available that you could print. Hope that helps!

  • @sgodsellify
    @sgodsellify 3 years ago

    How long does your Jetbot last with a fully charged battery? You said you are using a 10 Ah battery.

    • @ZacksLab
      @ZacksLab  3 years ago

      Average current draw is around 1 A, so with a 10 Ah battery you get close to 10 hours of run time. Under full load the Nano can draw 10 W, so run time will be closer to 5 hours if you're doing a lot of compute.
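
      The arithmetic behind those numbers, made explicit (run time = capacity / current; the full-load figure assumes the battery's 5 V output):

      capacity_ah = 10.0     # 10 Ah battery from the bill of materials
      idle_current_a = 1.0   # ~1 A average draw
      full_load_w = 10.0     # Nano under full load
      supply_v = 5.0

      print(capacity_ah / idle_current_a)            # ~10 hours at average draw
      print(capacity_ah / (full_load_w / supply_v))  # ~5 hours at full load (2 A)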

  • @Flix-f6q
    @Flix-f6q 3 years ago

    Why is the camera movement so jittery? How did you do it?

    • @ZacksLab
      @ZacksLab  3 years ago

      which camera? the one I'm filming with or the jetbot's?

  • @harrywillisdick8660
    @harrywillisdick8660 4 years ago

    Can you do this on an OpenMV H7?

    • @ZacksLab
      @ZacksLab  4 years ago

      It looks like the OpenMV H7 is an ARM-based computer vision platform. I have seen people implement neural networks on microcontrollers, but I would imagine that you will quickly reach its limitations. Also, you cannot take advantage of CUDA or libraries and frameworks like TensorFlow, TensorRT, PyTorch, Keras, etc. unless you're developing for a system that can run Linux and has an NVIDIA GPU available to it (as in the case of the Jetson Nano).

  • @MixedBag562
    @MixedBag562 5 years ago

    Wow. Before I saw this video, I was like "man, I don't need a $250 wall-avoiding robot. I have an ArcBotics Sparki (a programmable robot with its own C++ IDE)." Seriously, big difference. You might be like, "oh, it just avoids walls, so what," but really, this is just something else. Anybody who is just scrolling through the comments: this is a MUST SEE. I hate the natural human inclination toward clickbait instead of valuable and worthwhile content like this. I wish more people would seek out what actually is fulfilling and benefits their career long-term. Instead, they look for trash like "Spongebob licking the marble for 10 hours".
    But people are people, so here's a suggestion: make your titles more concise, and the thumbnail self-explanatory (that is, not including terms a lot of people don't get, like 'jet bot'). Also, presentation is BIG. And I don't just mean presentation in the video, but also the thumbnail AND the title. TKOR (Grant Thompson) is really a pro at this, and that's how he gets so many views and followers. His content isn't inherently interesting; it's just the title and his thumbnail.
    If you could make a video with
    1. Good content,
    2. A short, concise, self-explanatory thumbnail AND title that draws interest from someone new to your channel,
    you'd be unstoppable. Even novice [engineers, technicians, chemists, etc.] like TKOR, NurdRage (I'll admit he's fairly advanced) and The Slow Mo Guys (really, I think those people hardly even know about the chemistry of the explosives they use) make good channels with basic information
    And TONS of followers.
    Dude, if you want to get even more money for better projects via YouTube (ads, maybe sponsors), you've got to get more relatable.
    No, I'm not saying you need to get super basic like the above YouTubers; I mean you have to explain and draw people in such a way that you can attract impatient newbies looking for clickbait, then, when they least expect it, shove something that is rare and valuable down their throats (knowledge, skill, circuit science, computer programming). Then they realize that YouTube isn't just instant gratification and click-happy pleasure; there's MUCH, MUCH more to it than that.
    Awesome work Zack! Your videos and those like them make YouTube worth it. God bless!

    • @ZacksLab
      @ZacksLab  5 years ago

      Thank you FriedPickle, your comment and feedback means a lot to me. :)

    • @MixedBag562
      @MixedBag562 5 years ago

      ;)

  • @seadark2074
    @seadark2074 3 years ago

    I'm new to this. How do I use this model to recognize multiclass objects? Especially the code.

    • @ZacksLab
      @ZacksLab  3 years ago

      You're trying to use an NN to identify objects? If so, maybe start here: ua-cam.com/video/2XMkPW_sIGg/v-deo.html

    • @ZacksLab
      @ZacksLab  3 years ago

      this is also good: ua-cam.com/video/k5pXXmTkPNM/v-deo.html

  • @Dataanalyticspro
    @Dataanalyticspro 4 years ago

    What program did you use for data collection and tagging so fast with the onboard camera?

    • @ZacksLab
      @ZacksLab  4 years ago

      Hey Jared! It was written in Python and can be found here: github.com/NVIDIA-AI-IOT/jetbot/blob/master/notebooks/collision_avoidance/data_collection.ipynb

  • @adamjohnson9846
    @adamjohnson9846 3 years ago

    incoming jetbot

    • @adamjohnson9846
      @adamjohnson9846 3 years ago

      I went to it as a how-to video and found it very entertaining

    • @adamjohnson9846
      @adamjohnson9846 3 years ago

      and probably because there aren't very many people doing Raspberry Pi or AlexNet repos

    • @ZacksLab
      @ZacksLab  3 years ago

      awesome, glad you liked it :)

  • @erwinn58
    @erwinn58 4 years ago

    How long did it take for you to transfer the data to your PC?

    • @ZacksLab
      @ZacksLab  4 years ago

      I log into the Jetbot using WinSCP and transfer the files over SFTP. Took less than a minute... both my computer and Jetbot were near my router.

    • @erwinn58
      @erwinn58 4 years ago

      @@ZacksLab Aha! Today I tried transferring around 60 pictures, but when I was planning to download the 'dataset.zip' file in the collision avoidance demo, it said that I had no permission!
      I have no permission to download that zip file... do you have a clue?
      Thanks in advance

    • @ZacksLab
      @ZacksLab  4 years ago

      What software are you using to transfer the file?

    • @erwinn58
      @erwinn58 4 years ago

      @@ZacksLab Well, all these things happen in the Jupyter notebook. I always download my zip files with WinRAR.
      I heard it takes a while for the Jetbot to process all the pictures, but 3 hours later it still didn't work...

    • @ZacksLab
      @ZacksLab  4 years ago

      Hmm, if you're trying to transfer to a Windows PC, download WinSCP and you can use SFTP to transfer files to and from your Jetbot (use the Jetbot username and pw to login). If you're having issues locally on the Jetbot, it could be a Linux permissions issue, which you can adjust for that file with the terminal command: sudo chmod 777 /path/to/file.zip

  • @B0XMATTER
    @B0XMATTER 4 years ago

    NOT THE BEER

  • @binwangcu
    @binwangcu 5 years ago +1

    2:04 ~ 3:40 is the part of a data scientist's life that no one wants to take on :)

  • @syed5126
    @syed5126 5 years ago

    Can you list your PC's specs?

    • @ZacksLab
      @ZacksLab  5 years ago +1

      Sure, my PC is:
      Intel i7-4790K
      NVIDIA 1070Ti
      32GB DDR3
      500GB SSD

  • @liamdiaz7767
    @liamdiaz7767 3 years ago

    Do you think it's possible to do the same with a drone?

    • @ZacksLab
      @ZacksLab  3 years ago +1

      Absolutely. You just need access to the autopilot. Pixhawk (and similar drone autopilots) can take external commands that could come from an AI system such as this.

    • @liamdiaz7767
      @liamdiaz7767 3 years ago

      @@ZacksLab Is there any blog or site where I can read more about it? I'm working on a project and would love to implement some of this.

    • @ZacksLab
      @ZacksLab  3 years ago

      I'm not sure if there is one specifically for what you're talking about doing, but there is plenty of documentation on Pixhawk autopilots for drones if you search for it. What drone platform do you intend to work with?

    • @liamdiaz7767
      @liamdiaz7767 3 years ago

      @@ZacksLab I have an Intel Aero Ready to Fly drone; it comes with a Pixhawk as the flight controller. I have seen some projects with the same configuration, but with a Raspberry Pi used as the companion computer. I would like to use the Jetson Nano as the companion computer instead, for the purposes of data collection & collision avoidance as you have shown here with the Jetbot, but obviously on my drone.

    • @ZacksLab
      @ZacksLab  3 years ago

      Ah, got it. You'll have to look into the documentation for that platform; however, since you said it uses a Pixhawk, you can likely use the Jetson Nano to send maneuver commands to it via serial or CAN.
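
      A sketch of what sending such a maneuver command could look like from a Jetson using pymavlink over serial. The serial device, baud rate, frame, and velocity values are assumptions; check your flight stack's docs (ArduPilot/PX4) for the modes and type_mask it expects:

      from pymavlink import mavutil

      master = mavutil.mavlink_connection("/dev/ttyTHS1", baud=57600)
      master.wait_heartbeat()  # block until the autopilot is talking to us

      # request 1 m/s forward velocity in the body frame, velocity fields only
      master.mav.set_position_target_local_ned_send(
          0,                                          # time_boot_ms
          master.target_system, master.target_component,
          mavutil.mavlink.MAV_FRAME_BODY_OFFSET_NED,
          0b0000111111000111,                         # type_mask: use velocity only
          0, 0, 0,                                    # x, y, z position (ignored)
          1.0, 0.0, 0.0,                              # vx, vy, vz in m/s
          0, 0, 0,                                    # accelerations (ignored)
          0, 0)                                       # yaw, yaw_rate (ignored)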

  • @knarftrakiul3881
    @knarftrakiul3881 4 years ago

    Just drive up and down local roads several times... maybe throw in some deer models

  • @JuanPerez-jg1qk
    @JuanPerez-jg1qk 3 years ago

    Wow... it can play fetch with a stick... just throw the stick, it will chase it and run over it... awaiting the next command, master

    • @ZacksLab
      @ZacksLab  3 years ago

      we have to be nice to our robots so they are nice to us once they become sentient ;)

  • @dubber889
    @dubber889 3 years ago

    "I'll consider this a feature"
    Typical programmer, LOL: treating a bug as a feature

  • @DOSputin
    @DOSputin 4 years ago

    Cat gives 0 fucks.