Identify Objects and Precisely Measure Their Distance | with Deep Learning and Intel RealSense

  • Published 20 Oct 2024

COMMENTS • 105

  • @pysource-com  17 days ago

    🔥Learn how to build your own AI vision solutions: pysource.com/community

  • @murtazasworkshop  3 years ago  +10

    Nice Example

  • @AiPhile  3 years ago  +4

    That's great, sir. ♥️
    I have also measured the distance from an object to the camera 📷 using a simple webcam, just by detecting the face and estimating the distance.

    • @pysource-com  3 years ago  +2

      Good workaround. If you have a face, you can actually get an accurate distance estimate by detecting the size of the iris.

    • @AiPhile  3 years ago  +1

      @@pysource-com Thank you so much, sir ♥️. I will try that as well.
      I must also say I appreciate your efforts; I have learned a lot from this channel.

  • @labradoodlesilver3756  1 year ago  +2

    I'm gonna use this for FRC.

  • @somusundram1823  3 years ago  +2

    Nice one. Just curious: have you tried measuring object size using Mask R-CNN? Will it be able to detect the shape of an object? (For example, in my case I am interested in whether it can detect cardboard boxes, like the ones from couriers.)
    I have never worked with Mask R-CNN due to time constraints. I have used the RealSense depth image to find object shapes, but I would rather have a reliable method like R-CNN or YOLO that works under various conditions.

    • @pysource-com  3 years ago  +2

      If you train Mask R-CNN properly, it will get the shape of the object.
      If you know the distance and the shape, you can then also calculate the area and size of the object with good accuracy (see the sketch at the end of this thread).

    • @mohamadn6116  3 years ago

      @@pysource-com Can you please elaborate on how to do that? If we know the distance, and assuming the shape is a rectangle, how can I calculate the size of the rectangle?

    • @ТетянаГобатюк  3 years ago

      @@mohamadn6116 Good question. I am exploring it as well and still haven't found a method to get the size of the object when the distance to it is known. Maybe someone knows?
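
      A rough sketch of one way to do it (not from the video), using the depth and the camera intrinsics; it assumes a pyrealsense2 depth_frame aligned to the colour frame and a detected bounding box (x1, y1, x2, y2), with illustrative variable names:

        import pyrealsense2 as rs

        # intrinsics of the depth stream (focal lengths fx, fy are in pixels)
        intr = depth_frame.profile.as_video_stream_profile().get_intrinsics()

        # depth (in meters) at the centre of the detected box
        cx, cy = (x1 + x2) // 2, (y1 + y2) // 2
        d = depth_frame.get_distance(cx, cy)

        # pinhole relation: real size ≈ depth * pixel size / focal length
        width_m = d * (x2 - x1) / intr.fx
        height_m = d * (y2 - y1) / intr.fy
        area_m2 = width_m * height_m   # only meaningful for roughly rectangular, fronto-parallel objects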

  • @mohamadn6116  3 years ago  +2

    Thanks for the video. I like your channel a lot! Two questions, please: 1) How can I measure the size of an object using a D455, and 2) How would I measure the distance between two objects/points in 3D space? Thanks!

    • @camdennagg6419  2 years ago

      You can try using the pixel distance between the two and scaling that

    • @camdennagg6419  2 years ago

      and then use trig to find the actual distance, since you know the depth of both objects.
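
      A minimal sketch of this pixel-distance-plus-depth idea, assuming a pyrealsense2 depth_frame aligned to the colour frame and two detection centres (u1, v1) and (u2, v2); the names are illustrative:

        import numpy as np
        import pyrealsense2 as rs

        intr = depth_frame.profile.as_video_stream_profile().get_intrinsics()

        # deproject each object's centre pixel into a 3D point (meters)
        p1 = rs.rs2_deproject_pixel_to_point(intr, [u1, v1], depth_frame.get_distance(u1, v1))
        p2 = rs.rs2_deproject_pixel_to_point(intr, [u2, v2], depth_frame.get_distance(u2, v2))

        # straight-line distance between the two objects
        dist_m = np.linalg.norm(np.array(p1) - np.array(p2))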

  • @5zigen371  2 years ago  +1

    Hello Sir,
    I'm trying to send the depth video stream over HTTP (my idea is to send the RGB + depth streams to another machine that processes everything). Since every value is a uint16, I have to convert it to uint8 before sending; otherwise the values get truncated (only the first 8 bits, 0 to 255, survive and the higher bits are cut off). Have you ever tried to do something like this?
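
    One possible workaround (a sketch, untested against this exact setup): PNG encoding is lossless and supports 16-bit images, so the uint16 depth frame can be sent as-is without converting to uint8; this assumes OpenCV on both machines and that depth_image is the raw uint16 frame:

      import cv2
      import numpy as np

      # sender: compress the 16-bit depth frame losslessly
      ok, buf = cv2.imencode(".png", depth_image)
      payload = buf.tobytes()              # bytes to put in the HTTP request body

      # receiver: decode back to uint16 (IMREAD_UNCHANGED keeps the 16-bit depth)
      depth_back = cv2.imdecode(np.frombuffer(payload, np.uint8), cv2.IMREAD_UNCHANGED)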

  • @hassanyoussef2960  7 months ago

    I'm seeking your guidance, please. I just started with LiDAR and point clouds. I want to use them to locate objects on a shelf (for example, a supermarket shelf) and grab them with a robot. What are the steps I need to perform for such a task? I need the location of the object from the camera, and then I have to pass this information to the robot... right?

  • @enesschebbaki1226  10 months ago

    Is it possible to achieve the same result by using the depth stream rather than the RGB (colour) stream, as in this case?

  • @hassanyoussef2960  10 months ago

    Hi, thanks for the great explanation. I'm having a problem with frozen_inference_graph_coco.pb: somehow it is not read by my computer and I can't open it, so when I write mrcnn = MaskRCNN I get an error. What do you recommend I do?

  • @niranjansujay8487  2 years ago

    Hi @pysource, I have a doubt: is it possible to get all three dimensions of an object in real time, similar to how you got the distance information? I want to know the height, width, and thickness (length) of a detected object in 3D space using an Intel RealSense camera. Can you help me with this? Currently I am using YOLOv3/4/5 for object detection (I mean I know all three), so use whichever you're okay with for the W*H*L information.

  • @藍月-v1d  2 years ago

    If I use an Intel SR300, can I get the same result?
    Or should I change to another library?

  • @jazzysehgal7543  1 year ago

    Hello,
    from realsense_camera import *
    For some reason this import doesn't work with my pyrealsense2 package.

  • @xinwenzhang4150  2 years ago

    A really wonderful video, I got a lot from it. Thanks!!

  • @wahswolf88  2 years ago

    Excellent video, got me up to a basic understanding fast.

    • @wahswolf88  2 years ago

      Buying the complete courses was an easy decision.

  • @trongatbui967  2 years ago

    Thank you very much. May I ask how to accelerate the program with CUDA/cuDNN on Ubuntu? It seems that I cannot run the Mask R-CNN detection on the GPU even though my laptop has one.
    Hope to see your answer.

  • @sy2532  1 year ago

    Can you show how to use CUDA libraries for OpenCV for this project?

  • @BharathKumarThota-eg8jc  1 year ago

    Great content! Can you let me know how to increase the speed of the detections or the frame rate? I have CUDA installed on my laptop and it works fine for YOLO, but for this I am facing issues.

  • @myhofficiel4612  1 year ago

    Very useful video that efficiently explains how this works.

  • @muhammadtalhaejaz4115  1 year ago

    Can you tell me what version of OpenCV you used?

  • @ShivangiKeshri-nb7wr  1 year ago

    I am not able to get the confirmation email link from your website, and because of that I am not able to download the files needed to run this. Please resolve this issue.

  • @EstevanCandido  2 years ago

    Great video! I would like to know how to use the mask extracted from YOLACT to measure wear on a metallic surface. Can you help me on this path?

  • @namnguyenhoai8852  2 years ago  +1

    Why is the depth map the distance from the object to the camera?

  • @AndreiHirata  3 years ago

    What is the best camera for building an AR Sandbox?

  • @md.ashrafulalam1401  1 year ago

    I just love your videos and explanations :)

  • @SohamPadhyeM22RM007  1 year ago

    How do I get the IMU readings from the built-in IMU in the D455?

  • @redhwanalgabri7281  1 year ago

    How do I measure the accuracy of the distance from the object to the camera?

  • @RanjanS-h4i  1 year ago

    Can I run this on a Raspberry Pi or a Beaglebone by any chance?

  • @보리타작-x9s  2 years ago

    Can I manage to make some content with the RealSense camera in Unreal Engine? I've figured out it can be done with Unity, but there is no information for UE4 :)

  • @sermadreda399  1 year ago

    Great video, thank you for sharing.

  • @антонселиванов-и9о  2 years ago  +1

    Hi, how do I get frames from a *.bag file recorded with the RealSense?
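
    A minimal sketch of playing back a recorded .bag with pyrealsense2 (the file name is a placeholder):

      import pyrealsense2 as rs

      pipeline = rs.pipeline()
      config = rs.config()
      config.enable_device_from_file("recording.bag")   # path to the recorded .bag file

      pipeline.start(config)
      frames = pipeline.wait_for_frames()
      depth_frame = frames.get_depth_frame()
      color_frame = frames.get_color_frame()
      pipeline.stop()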

  • @VirtualEducationLYF-dd1lh  7 months ago

    I want to run this code with my laptop webcam. What do I do? Please tell me.

  • @richubini2129  1 year ago

    What if the RGB frame and the depth frame have different resolutions?

  • @TravelwithRasel.  6 months ago

    Hi, I cannot download the code file.

  • @rashidabbasi6035  3 years ago

    Please run some models, like LiDAR-based detection, tracking, segmentation, and compression. Please make a video on this, I am looking forward to it.

  • @sohampadhye5408  1 year ago

    Can this work on the D455?

  • @pritammalusare7451  2 years ago

    Hey Pysource,
    thank you for these videos. I want to implement the same project on a Raspberry Pi, but the RealSense camera is quite expensive. Any other way?? 🙂

  • @mertolojis  3 years ago

    How can I use the distance algorithm with my own detection algorithm?

  • @talhaejaz7651  2 years ago

    Where can I find the files, not just the code?

  • @jonparker8832  2 years ago  +1

    I'm getting this error when I try to run:
    C:\Users\Acer\AppData\Local\Microsoft\WindowsApps\python3.9.exe C:/test/measure_object_distance.py
    Loading Intel Realsense Camera
    Traceback (most recent call last):
    File "C:\test\measure_object_distance.py", line 7, in
    rs = RealsenseCamera()
    File "C:\test\realsense_camera.py", line 17, in __init__
    self.pipeline.start(config)
    RuntimeError: Couldn't resolve requests
    Can someone help?

    • @rodrigodomingues8491  2 years ago  +1

      Hi, I also got the same error and fixed it by changing the resolution of the camera in realsense_camera.py:
      config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
      config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
      Hope this helps.

  • @jordilopez9587  2 years ago

    Hello Sergio, is this possible, or is there code for this, with the LiDAR R2000?

  • @MatheusSilva-qm3ph  3 years ago

    I like this program......👍👏

  • @ZhifanSong  1 year ago

    I used both my personal and institutional email accounts and I didn't receive any email, so I can't download the files. Is there a solution?

  • @Fools00  2 years ago

    Sir, thank you for your great video.
    I have a question: can I apply the same code you linked to the Intel® RealSense™ Depth Camera SR305?

    • @pysource-com  2 years ago

      I haven't personally tested that camera, but most likely it should work with the same code.

  • @YigalBZ  3 years ago

    If I use a simple camera instead of the RealSense, can I still assess the distance with the following assumptions: 1) the camera location is fixed, and 2) the object I am detecting is known in advance? I would think that in this case the size of the detected object can be translated into a distance.

    • @pysource-com  3 years ago  +1

      Yes, a good way would be to use an Aruco marker.
      You can check this other video: ua-cam.com/video/lbgl2u6KrDU/v-deo.html
      You will learn how to get the size of the object, and you can adapt it to get the distance (a sketch of the known-size idea follows at the end of this thread).

    • @YigalBZ  3 years ago

      @@pysource-com Thanks! Such a simple solution.
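
      For reference, a sketch of the known-size idea from the question above (plain pinhole model, no depth camera); focal_length_px has to be calibrated once, e.g. from a reference photo taken at a known distance, and all names are illustrative:

        # one-time calibration from a reference photo of the known object:
        # focal_length_px = pixel_width_ref * known_distance_m / known_width_m

        # at runtime, with a detected bounding box of pixel_width pixels:
        distance_m = focal_length_px * known_width_m / pixel_width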

  • @kyteng  2 years ago

    Great video on distance measurement.
    Is it possible to apply the same concept and use the Intel RealSense Depth Camera to check the smoothness/flatness of flat surfaces such as a floor or wall??

  • @liavbarnoy1237  2 years ago

    Great video, it helped me a lot,
    but I have trouble installing pyrealsense2:
    error: no matching distribution found for pyrealsense2
    I am using Ubuntu.
    Please help me fix it.

    • @pysource-com  2 years ago

      I recommend using Python 3.8, and it should be on a desktop computer, not an Nvidia Jetson or a Raspberry Pi, as pyrealsense2 is not available via pip install for them.

  • @شهدبخش-ه1ي  3 years ago

    Hello,
    it's a great project.
    Can you please illustrate which cv2 function or other technique I should use to make the center stable and the depth accurate? Or give a hint so I can figure it out by myself.
    Thank you, Sir.

    • @pysource-com  3 years ago  +1

      There are different approaches we could use; I'll give you a couple of tips:
      - either take a bigger area instead of just one point at the center: you could take more points (e.g. a 10x10 area, so 100 points) and average them (see the sketch at the end of this thread)
      - or implement this with object tracking, so that the bounding box stays stable while following the object.

    • @شهدبخش-ه1ي  3 years ago

      Thank you so much
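
      A sketch of the first tip above (averaging a small patch of depth values around the centre instead of reading a single pixel), assuming depth_image is the raw uint16 depth frame in the camera's depth units (typically millimetres) and (cx, cy) is the object centre:

        import numpy as np

        # take a 10x10 patch of depth values around the centre point
        patch = depth_image[cy - 5:cy + 5, cx - 5:cx + 5].astype(float)

        # ignore invalid (zero) readings before averaging
        valid = patch[patch > 0]
        distance = valid.mean() if valid.size else 0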

  • @petersobotta3601  3 years ago

    Will this work on a Jetson Nano? Any chance of a tutorial on that if it does? Great channel, keep up the awesome work!

    • @pysource-com  3 years ago

      Nope, you will need at least a Jetson Xavier to make this work, plus a lighter segmentation algorithm.
      On a Jetson Nano I would go with YOLO + RealSense (instead of Mask R-CNN).

    • @petersobotta3601  3 years ago

      @@pysource-com Shame, the Nano is a great device for trying out most of your CV tutorial stuff. Thanks for the reply 👍

  • @MuhammadBilal-jp5ye  3 years ago

    Great video. Can we use the same technique with a Pi camera?

    • @pysource-com  3 years ago

      Nope, you need a depth camera for this.

  • @danielbell7483  3 years ago

    Great video. How would I measure the distance between two objects/points in 3D space?

    • @camdennagg6419  2 years ago

      You can try using the pixel distance between the two and scaling that

    • @camdennagg6419  2 years ago

      and then use trig to find the actual distance, since you know the depth of both objects.

    • @danielbell7483  2 years ago

      Thanks @@camdennagg6419. In the end I used the .get_depth_frame() and .get_distance() functions (in x and y) on aligned frames, then used trig.

    • @camdennagg6419  2 years ago

      @@danielbell7483 Eyy, nice job. It's nice when something works out haha.

  • @marosmartin7762  3 years ago

    Great video!

  • @vishalrawat953  1 year ago

    Bro, I'm not able to download the source code.

  • @rashidabbasi6035  3 years ago

    Dear, can you use a LiDAR camera to do this, please?

    • @pysource-com  3 years ago

      I might do that with LiDAR in the future.

    • @Jay1n9  2 years ago

      Yes, definitely. I tried it on the Intel RealSense L515.

  • @hussamhaij6238  3 years ago

    Can these files be used with the Intel RealSense L515?

    • @pysource-com  3 years ago

      They should work, as the library is the same for all the Intel RealSense cameras.

    • @Jay1n9  2 years ago

      Yes, it works.

  • @garceling  3 years ago

    Can this work on the Raspberry Pi too?

    • @pysource-com  3 years ago  +1

      Nope, the Raspberry Pi is too weak to handle object segmentation.
      On a Raspberry Pi you could alternatively use MobileNet object detection + Intel RealSense (see the sketch at the end of this thread).

    • @garceling  3 years ago

      @@pysource-com Any chance of releasing a tutorial on how to do this? LMAO, I am stuck :( Thank you.
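
      A rough sketch of that alternative (MobileNet-SSD through OpenCV's dnn module plus a depth lookup at the box centre), not the method from the video; the model file names are placeholders for the public Caffe MobileNet-SSD weights, and color_image / depth_frame are assumed to come from the RealSense as in the tutorial:

        import cv2
        import numpy as np

        net = cv2.dnn.readNetFromCaffe("MobileNetSSD_deploy.prototxt",
                                       "MobileNetSSD_deploy.caffemodel")

        blob = cv2.dnn.blobFromImage(cv2.resize(color_image, (300, 300)),
                                     0.007843, (300, 300), 127.5)
        net.setInput(blob)
        detections = net.forward()           # shape (1, 1, N, 7)

        h, w = color_image.shape[:2]
        for i in range(detections.shape[2]):
            confidence = detections[0, 0, i, 2]
            if confidence < 0.5:
                continue
            x1, y1, x2, y2 = (detections[0, 0, i, 3:7] * [w, h, w, h]).astype(int)
            cx, cy = (x1 + x2) // 2, (y1 + y2) // 2
            distance_m = depth_frame.get_distance(cx, cy)   # depth at the box centre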

  • @olalekanisola8763  3 years ago  +1

    Great tutorial. I tried running
    from realsense_camera import *
    rs = RealsenseCamera()
    but I get an error:
    Traceback (most recent call last):
    File "C:/Users/owner/PycharmProjects/Yolo/yolo.py", line 4, in
    rs = RealsenseCamera()
    File "C:\Users\owner\PycharmProjects\Yolo\realsense_camera.py", line 19, in __init__
    self.pipeline.start(config)
    RuntimeError: Couldn't resolve requests

    • @머지-t7m  3 years ago  +2

      realsense_camera.py, line 13:
      config.enable_stream(rs.stream.color, 1280, 720, rs.format.bgr8, 30)
      config.enable_stream(rs.stream.depth, 1280, 720, rs.format.z16, 30)
      Replace them as below:
      config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
      config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)

    • @olalekanisola8763  3 years ago

      @@머지-t7m Wow, thank you so much, it works!

    • @Jay1n9  2 years ago

      Thanks.

    • @jonparker8832  2 years ago  +1

      How did you solve it? I can't see the reply.

    • @sean9734  2 years ago

      @@jonparker8832 Hi, adjust the resolution in the following lines to (640, 480). This solved the problem in my case.
      config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
      config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
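
      If 640x480 also fails, one way to see which resolutions and formats the connected camera actually supports (a sketch using the standard pyrealsense2 API):

        import pyrealsense2 as rs

        ctx = rs.context()
        for dev in ctx.query_devices():
            print("Device:", dev.get_info(rs.camera_info.name))
            for sensor in dev.query_sensors():
                for profile in sensor.get_stream_profiles():
                    # only the video streams matter here (skip IMU/motion streams)
                    if profile.stream_type() in (rs.stream.depth, rs.stream.color):
                        vp = profile.as_video_stream_profile()
                        print(" ", profile.stream_type(), vp.width(), "x", vp.height(),
                              "@", profile.fps(), "fps,", profile.format())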

  • @gulagprescription9993  11 months ago

    Cool

  • @SLAYERSARCH  2 months ago

    expensive

  • @jaeyounglee574  2 years ago

    Your example is not working; only the code below works. What is the difference? My environment is a Jupyter notebook on Windows 10 with Anaconda.
    --------------------------------------
    import pyrealsense2 as rs

    # Setup:
    pipe = rs.pipeline()
    cfg = rs.config()
    # cfg.enable_device_from_file("../object_detection.bag")  # original example code
    config = rs.config()                      # added code
    config.enable_record_to_file('test.bag')  # added code
    profile = pipe.start(cfg)

    # Skip the first 5 frames to give the auto-exposure time to adjust
    for x in range(5):
        pipe.wait_for_frames()

    # Store the next frameset for later processing:
    frameset = pipe.wait_for_frames()
    color_frame = frameset.get_color_frame()
    depth_frame = frameset.get_depth_frame()

    # Cleanup:
    pipe.stop()
    print("Frames Captured")
    -----------------------------------

  • @aaryadeb893  2 years ago

    How could we train Mask R-CNN on custom pictures or a custom dataset?

    • @pysource-com  2 years ago  +1

      You can do that by following this tutorial: ua-cam.com/video/WuvY0wJDl0k/v-deo.html

    • @chiryvan7095  2 years ago

      @@pysource-com Hi, thank you for your great video. How is the .h5 model used in this project?