Jetson Nano Custom Object Detection - how to train your own AI

  • Published 6 Jul 2024
  • Do you want to detect your own objects using a Jetson Nano? Then this is the video for you. In this video I will show you how I've captured a set of robot images using a camera attached to the Jetson Nano, labelled them in Pascal VOC format (see the annotation sketch after the chapter list), trained a Single Shot Detection Network (MobileNet SSD), and then used it to detect robots in a live video stream.
    The Face Detection using OpenCV video is here: • Face Detection using O...
    For more information, tutorials, parts and more visit:
    www.smarsfan.com
    To join the membership at bronze, silver or gold levels, head over to
    www.smarsfan.com/membership
    Enjoy this video? Buy me a coffee!
    www.buymeacoffee.com/kevinmca...
    Follow me on Instagram - @kevinmcaleer
    Follow me on Twitter - @kevsmac
    Join the Facebook group - Small Robots
    Music by Epidemic Sounds
    www.epidemicsound.com/referra...
    My Code on GitHub:
    www.github.com/kevinmcaleer
    Chapters
    00:00:00 Intro
    00:00:44 Session Overview
    00:01:18 What we're shooting for
    00:02:31 Computer Vision Models
    00:03:35 What is Object Detection
    00:04:16 Model Preparation Process
    00:05:42 Pascal VOC
    00:06:30 Create a labels file
    00:07:36 Capture Assets
    00:08:59 Training and Testing
    00:10:43 How does Machine Learning Work
    00:12:31 Image Processing
    00:12:45 Neural Networks
    00:13:18 How does Machine Learning actually work though?
    00:14:05 Single-Shot Detection Network
    00:18:23 Train the model
    00:19:05 Things I found out the hard way
    00:20:07 Followed the Hello AI World tutorial from the NVIDIA website
    00:20:52 Demo
    00:34:02 Commands used
    00:42:50 Annotations file - XML Format
    00:45:20 Training the model
    00:52:33 Q&A Session
    #JetsonNano #Python #Robotics
  • Entertainment
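
The labelling step described above (chapters 00:05:42 Pascal VOC, 00:06:30 Create a labels file and 00:42:50 Annotations file - XML Format) produces one Pascal VOC XML annotation per captured image. As a rough illustration of what those files contain, here is a minimal Python sketch that reads one such file with the standard library; the file name robot_001.xml and the robot class are placeholder assumptions, not assets from the video.

```python
# Minimal sketch: read a Pascal VOC annotation of the kind produced by the
# labelling step shown in the video. 'robot_001.xml' and the 'robot' class
# are placeholder names for illustration only.
import xml.etree.ElementTree as ET

def read_voc_annotation(path):
    """Return the image filename and a list of (label, xmin, ymin, xmax, ymax)."""
    root = ET.parse(path).getroot()
    filename = root.findtext("filename")
    boxes = []
    for obj in root.findall("object"):
        label = obj.findtext("name")                     # e.g. "robot"
        bndbox = obj.find("bndbox")
        coords = tuple(int(float(bndbox.findtext(tag)))
                       for tag in ("xmin", "ymin", "xmax", "ymax"))
        boxes.append((label, *coords))
    return filename, boxes

if __name__ == "__main__":
    filename, boxes = read_voc_annotation("robot_001.xml")
    print(filename)
    for label, xmin, ymin, xmax, ymax in boxes:
        print(f"{label}: ({xmin}, {ymin}) -> ({xmax}, {ymax})")
```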

COMMENTS • 38

  • @jumill87
    @jumill87 2 years ago +1

    This is awesome, using a Jetson Nano for my college senior project and doing AI object detection to detect whether someone is sitting in a chair or not.

  • @kenthemachinist4886
    @kenthemachinist4886 2 years ago +1

    Very good breakdown

  • @RoboTecs
    @RoboTecs 2 years ago +1

    Excellent! Thanks for sharing!

  • @cmacks95
    @cmacks95 2 years ago +2

    This is a great reference video. I'm trying to identify the location of objects in "cells" from an aerial view about 30 feet up. Luckily, all my objects land in a specific 6x6 foot square area whenever they need to be detected. Would I be able to use fewer training images, given the specific conditions of my vision area?

  • @tk5782
    @tk5782 2 years ago +2

    For the bird table, could you combine detection, tracking and edge detection to identify the bird from the front/side, then detect when it turns and add those frames to the set of images for learning, with the bounds and classification specified automatically? Great video by the way. I knew nothing about ML before this video, and now I'm chomping at the bit, ready to get started!

  • @igoralves1
    @igoralves1 1 year ago +1

    Thanks for the video. I would definitely pay for a deep course in AI/NVIDIA.

  • @BikingChap
    @BikingChap 1 year ago

    Fascinating video, thank you. Sorry for the probably obvious question (I’m trying to get my head around the basics!): once you have training images and the network has been trained on them, does the network gain further ‘experience’ as you expose it to images for identification, or is its knowledge now set in stone? Many thanks!

  • @GAment_11
    @GAment_11 2 years ago

    Thanks for the video. One question: have you learned how to run the custom model from a Python 3 script (not just from the terminal)? I am attempting to adapt the sample tutorial from "Jetson AI Fundamentals S3E4" provided by NVIDIA (it's on YouTube and a good reference), which runs a pre-trained model from a 10-line Python script. That said, it would be very useful to do the same for custom models.
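
A rough sketch of one way to do this with the jetson-inference Python bindings (not code from the video): pass the same flags you would give the detectnet command when constructing detectNet. The paths under models/robots/ and the blob names below are assumptions based on the usual SSD-MobileNet ONNX export layout; adjust them to your own model directory.

```python
# Sketch only: load a custom-trained SSD-MobileNet ONNX model from a Python 3
# script using the jetson-inference / jetson-utils bindings.
# The model and label paths are placeholders - point them at your own files.
import jetson.inference
import jetson.utils

net = jetson.inference.detectNet(argv=[
    "--model=models/robots/ssd-mobilenet.onnx",   # exported ONNX model (assumed path)
    "--labels=models/robots/labels.txt",          # labels file created before capture
    "--input-blob=input_0",                       # blob names used by the SSD export
    "--output-cvg=scores",
    "--output-bbox=boxes",
    "--threshold=0.5",
])

camera = jetson.utils.videoSource("/dev/video0")  # USB webcam; CSI cameras use csi://0
display = jetson.utils.videoOutput("display://0")

while display.IsStreaming():
    img = camera.Capture()
    detections = net.Detect(img)                  # runs inference and overlays the boxes
    display.Render(img)
    display.SetStatus(f"{len(detections)} objects detected")
```

This mirrors the short detectnet.py example from NVIDIA's Hello AI World tutorial, just pointed at a custom model instead of a pre-trained one.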

  • @TheArcanis87
    @TheArcanis87 2 years ago +1

    The Raspicam V1.3 didn't work for me either. Apparently the Jetson Nano only supports the Raspicam V2 (IMX219 sensor) out of the box. I was able to get a regular USB camera to work.

  • @jardelvieira8742
    @jardelvieira8742 1 year ago

    How can I plot and visualize the training and test charts? What I want to do is show the results of training ("loss" and "accuracy") as graphs ("loss" vs "epochs" / "accuracy" vs "epochs") and create a confusion matrix.
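
Not something the video covers, but as a hedged sketch of one approach: if you record the per-epoch numbers from your training run yourself, matplotlib and scikit-learn can draw the curves and the confusion matrix. All the values below are placeholders, not real training results.

```python
# Sketch: plot training curves and a confusion matrix from values you have
# recorded yourself (e.g. copied from the training console output).
# Every number and label below is a placeholder.
import matplotlib.pyplot as plt
from sklearn.metrics import ConfusionMatrixDisplay

epochs       = [1, 2, 3, 4, 5]
train_loss   = [5.2, 3.9, 3.1, 2.7, 2.5]        # loss per epoch (placeholder)
val_accuracy = [0.41, 0.55, 0.63, 0.70, 0.74]   # accuracy per epoch (placeholder)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.plot(epochs, train_loss, marker="o")
ax1.set_xlabel("epochs")
ax1.set_ylabel("loss")
ax1.set_title("Loss vs epochs")
ax2.plot(epochs, val_accuracy, marker="o")
ax2.set_xlabel("epochs")
ax2.set_ylabel("accuracy")
ax2.set_title("Accuracy vs epochs")

# Confusion matrix from true and predicted labels of a held-out test set.
y_true = ["robot", "robot", "background", "robot", "background"]
y_pred = ["robot", "background", "background", "robot", "background"]
ConfusionMatrixDisplay.from_predictions(y_true, y_pred)

plt.show()
```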

  • @tk5782
    @tk5782 2 years ago +3

    Have you done any work with sensor fusion, using multiple input types to build a model? For example image and audio? Or even visible light and IR cameras?

    • @kevinmcaleer28
      @kevinmcaleer28 2 years ago +1

      I’ve not done any sensor fusion, yet!

    • @tk5782
      @tk5782 2 years ago

      @@kevinmcaleer28... "yet" 😁😁❤️

  • @SuperLefty2000
    @SuperLefty2000 2 years ago +1

    Hi Kevin, if I have a large dataset of thousands of images, how can I train on that? Using the asset capture tool would take forever to complete. Is there any tool for that?

    • @kevinmcaleer28
      @kevinmcaleer28 2 years ago

      That's the bit that can't be done automatically, unfortunately. I'm not aware of a tool that can do that!

  • @helmijani7501
    @helmijani7501 6 months ago

    Hi Kevin, I subscribed to you. How do I make the program at 28:17 work with a CSI camera?
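
For reference, a minimal sketch assuming the jetson-utils bindings: a CSI camera (such as the IMX219-based Raspberry Pi Camera V2 mentioned in another comment) is selected with the csi://0 source string, whereas a USB webcam uses a /dev/video* device path.

```python
# Sketch: open the CSI camera instead of a USB webcam with the jetson-utils API.
import jetson.utils

camera = jetson.utils.videoSource("csi://0")        # first CSI camera port
# camera = jetson.utils.videoSource("/dev/video0")  # USB webcam, for comparison

img = camera.Capture()
print(f"captured a {img.width}x{img.height} frame")
```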

  • @snowphrall2116
    @snowphrall2116 1 year ago +1

    Can these examples run on the 2 GB RAM version?

  • @jardelvieira8742
    @jardelvieira8742 1 year ago

    How can I train a model in Colab or on a GPU computer and then use the model on Jetson Nano?
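
Not covered in the video, but in principle a checkpoint trained on a desktop GPU or in Colab only needs to be exported to ONNX and copied over, after which the Nano loads it like any locally trained model (see the detectNet sketch above). Below is a generic sketch of that export step; the stand-in network, checkpoint path and input size are placeholders, and the tutorial's training repository ships its own export script for its SSD-MobileNet checkpoints.

```python
# Sketch: export a PyTorch model trained elsewhere (Colab / desktop GPU) to ONNX
# so it can be copied to the Jetson Nano and loaded with TensorRT.
# The network, 'checkpoint.pth' and the 300x300 input size are placeholders.
import torch
import torchvision

model = torchvision.models.mobilenet_v2(num_classes=3)    # stand-in for your trained network
model.load_state_dict(torch.load("checkpoint.pth", map_location="cpu"))
model.eval()

dummy_input = torch.randn(1, 3, 300, 300)                 # use your network's real input size
torch.onnx.export(model, dummy_input, "model.onnx",
                  input_names=["input_0"], output_names=["output_0"])
# Copy model.onnx (plus your labels file) to the Nano and point the detection
# code's --model and --labels arguments at them.
```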

  • @coolchap22
    @coolchap22 2 years ago +1

    Hi Kevin, I'm trying to run inference on more than 25 objects in an image. Any idea how that limit can be removed?

    • @kevinmcaleer28
      @kevinmcaleer28 2 years ago

      I'm not sure where the limitation exists; that might be a TensorFlow thing.

  • @sylvesterthethird4985
    @sylvesterthethird4985 1 year ago

    Hello, after training my custom dataset and exporting it to ONNX, it gives me an error stating OSError: couldn't find valid .pth checkpoint under 'models/TuodMango'.

  • @FranchLorilla
    @FranchLorilla 2 years ago +1

    Hi, what streaming app did you use, and what did you use for the presentation slides?

    • @FranchLorilla
      @FranchLorilla 2 years ago

      You have a beautiful presentation.

    • @kevinmcaleer28
      @kevinmcaleer28 2 years ago +1

      I use Apple Keynote for slides and Ecamm Live for live-streaming

  • @SijuManuel
    @SijuManuel 2 years ago +1

    Is it possible to do transfer learning from video instead of camera?

    • @kevinmcaleer28
      @kevinmcaleer28 2 years ago

      It would need to be still images for transfer learning, so you could use video as long as you can freeze-frame it to outline the image to be captured. I don't think there is a way around the hard work required to build the library of images for learning.
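
A small sketch of that freeze-frame idea, assuming OpenCV is available: save every Nth frame of a video as a still image that can then be labelled like any other captured asset. The file names and frame step are placeholder choices.

```python
# Sketch: turn a video into still images for labelling / transfer learning.
# 'birds.mp4', the output folder and the frame step are placeholders.
import os
import cv2

video_path = "birds.mp4"
output_dir = "frames"
step = 30                                # keep one frame in every 30 (~1 per second at 30 fps)

os.makedirs(output_dir, exist_ok=True)
capture = cv2.VideoCapture(video_path)

index = saved = 0
while True:
    ok, frame = capture.read()
    if not ok:                           # end of video (or unreadable frame)
        break
    if index % step == 0:
        cv2.imwrite(os.path.join(output_dir, f"frame_{index:06d}.jpg"), frame)
        saved += 1
    index += 1

capture.release()
print(f"saved {saved} still images to {output_dir}/")
```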

  • @Hybrid.Robotics
    @Hybrid.Robotics 2 years ago +1

    The term "grok" comes from the book Stranger In A Strange Land. 'PITA' means Pain In The Ass. :) ;) Now you have learned two new terms you can use. ;) I expect to hear you use both of these real soon. ;)

  • @baehr4308
    @baehr4308 1 year ago

    I am getting the same error that you ran into at 49:46, but you did not go through any solution for it. Has anyone else had this issue? I cannot find anything about it online.

  • @shivkumar-no6nk
    @shivkumar-no6nk 2 years ago +1

    Hi Kevin McAleer-san, I am able to run it on my custom data, but I want to stream the detections over RTSP to VLC. Can you help me get RTSP working?

    • @kevinmcaleer28
      @kevinmcaleer28 2 years ago

      The best way to get help is to join our Discord group - lots of smart people in there who can help: action.smarsfan.com/join-discord

  • @Hybrid.Robotics
    @Hybrid.Robotics 2 years ago +1

    If you are not making mistakes you have no opportunity to learn. ;) Hurry up and make a misteak so you can get on to the next one! ;)

  • @jardelvieira8742
    @jardelvieira8742 1 year ago +1

    Did you use PowerPoint?

  • @cullenbuteau2143
    @cullenbuteau2143 2 years ago

    Can you make a skill where we can listen to music?

  • @brianfette8947
    @brianfette8947 2 years ago

    Watched 45 minutes just for him to fail. Not worth it.

    • @kevinmcaleer28
      @kevinmcaleer28 2 years ago

      Fail? Can you explain - the demo works fine

    • @tk5782
      @tk5782 2 years ago

      Welcome to the world of software engineering! :) The demo was great, even if some of the commands didn't execute