Distance (Angles+Triangulation) - OpenCV and Python3 Tutorial - Targeting Part 5

  • Published 1 Oct 2024

COMMENTS • 373

  • @ClaytonDarwin
    @ClaytonDarwin  5 років тому +26

    Here is the code I used: gitlab.com/duder1966/youtube-projects/-/tree/master/OpenCV/triangulation I just checked this code March 11, 2022. Read the notes.

    • @muthukumars50
      @muthukumars50 5 років тому

      Hi Darwin, while running the above code I am getting an error like "Corrupt JPEG data: 2 extraneous bytes before marker 0xd0". Any suggestion to fix this issue?

    • @ClaytonDarwin
      @ClaytonDarwin  5 років тому +1

      I've seen something similar. I think the problem is that OpenCV doesn't like the data format coming from your camera (not a Python issue). I have a camera that causes OpenCV to print an error similar to that, but it doesn't cause any problems. If the error isn't fatal, you may be able to change the level of error reporting in OpenCV, or you may be able to redirect stderr. Try it with a different camera if you have one and see if it changes or goes away.
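
      A minimal sketch of those two workarounds (an illustration, not code from the video); whether your OpenCV build honors the OPENCV_LOG_LEVEL variable is an assumption, and the camera index is a placeholder:

      ```python
      import os

      # Option 1: ask OpenCV itself to log only errors (set before importing cv2).
      os.environ['OPENCV_LOG_LEVEL'] = 'ERROR'

      # Option 2: silence the C-level stderr stream, which is where libjpeg prints
      # "Corrupt JPEG data ..." warnings. Note this also hides Python tracebacks.
      devnull = os.open(os.devnull, os.O_WRONLY)
      os.dup2(devnull, 2)          # 2 = stderr file descriptor

      import cv2

      cap = cv2.VideoCapture(0)    # camera index 0 is a placeholder; adjust to your device
      ok, frame = cap.read()
      print('frame read ok:', ok)
      cap.release()
      ```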

    • @meghashyamjoshi1094
      @meghashyamjoshi1094 4 роки тому

      @@muthukumars50 I am getting the same error, can you please help me sort this out?

    • @meghashyamjoshi1094
      @meghashyamjoshi1094 4 роки тому

      @@ClaytonDarwin I am actually getting the same error with a Logitech C270 camera. Can you tell me how to fix this issue?

    • @ClaytonDarwin
      @ClaytonDarwin  4 роки тому

      You need to write a small python opencv script just to test the camera. Once you can read from the camera you can use those settings for the big script. Maybe use the script from the motion capture video.
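
      A rough sketch of the kind of quick camera-test script being suggested (not the script from the video); the camera index, resolution, and FPS values are assumptions to adjust for your hardware:

      ```python
      import cv2

      cap = cv2.VideoCapture(0)                    # try 0, 1, 2, ... until your camera opens
      cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
      cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)
      cap.set(cv2.CAP_PROP_FPS, 30)

      while cap.isOpened():
          ok, frame = cap.read()
          if not ok:
              print('camera read failed')
              break
          cv2.imshow('camera test', frame)         # press q to quit
          if cv2.waitKey(1) & 0xFF == ord('q'):
              break

      cap.release()
      cv2.destroyAllWindows()
      ```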

  • @polyglotdev
    @polyglotdev 4 роки тому +3

    One of the best OpenCV videos, no doubt. Simple but powerful.

    • @ClaytonDarwin
      @ClaytonDarwin  4 роки тому +1

      Thanks for taking the time to say that. 👍

  • @subhajit201
    @subhajit201 3 роки тому +4

    You got the angle measurements wrong because there is distance between the camera lens and the sensor. This is proven by the fact that you can see beyond the lines close to the camera. What you have to do is mark 2 points on each side. Join the points with a line and extend it towards the camera. The 2 lines from each side will intersect at a point. That is your effective sensor position. You have to measure the angle at that point between the 2 lines. This will give you the correct field of view, which will be less than what you measured currently.

    • @ClaytonDarwin
      @ClaytonDarwin  3 роки тому +4

      Give us a link to your video so we can see how you did it.

    • @subhajit201
      @subhajit201 3 роки тому

      @@ClaytonDarwin I have not done it. I'm just saying, from a mathematical point of view, it could improve your system's accuracy.

    • @Dylan-kw8pz
      @Dylan-kw8pz 3 роки тому +5

      I saw that too. You have to put the camera on the paper itself, then mark how wide you can see, say, 12 inches away, then do the same thing 6 inches away. Connect the dots to form the lines. You should not be able to see outside your field of view. In the video you can see the field-of-view lines inside the frame.
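
      A small sketch of the calibration these two comments describe, assuming you measure the visible frame width at two known distances in front of the camera (the numbers below are made up):

      ```python
      import math

      d1, w1 = 6.0, 7.0      # distance from the camera face and visible width there (e.g. inches)
      d2, w2 = 12.0, 13.5    # a second, farther measurement

      # The field-of-view edges are straight lines; extend them back until the
      # half-width shrinks to zero to find the effective apex (sensor) position.
      slope = (w2 / 2 - w1 / 2) / (d2 - d1)   # half-width gained per unit of distance
      d0 = d1 - (w1 / 2) / slope              # apex distance (negative means behind the camera face)

      fov = 2 * math.degrees(math.atan((w2 / 2) / (d2 - d0)))
      print(f'apex at {d0:.2f} (same units as d), horizontal FOV ~ {fov:.1f} degrees')
      ```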

    • @texasfossilguy
      @texasfossilguy Рік тому +1

      @@Dylan-kw8pz make a tutorial please

  • @pandamiami80s82
    @pandamiami80s82 4 роки тому +2

    Thank you. Very informative! Can't find any better explanation

  • @nguyenquocbao3812
    @nguyenquocbao3812 4 роки тому +2

    You said in the video at 21:00 "we didn't do all the fancy camera calibration", but I think all the stuff you adjust from 11:30 to 13:20, and the test after that from 15:50 to 19:00, is actually another kind of "fancy calibration". The difference is that they do it with algorithmic code and you do it manually. Thanks for the great video anyway.

    • @ClaytonDarwin
      @ClaytonDarwin  4 роки тому +3

      Fancy just means I don't know how to do it yet.

  • @naeemullahkhan6965
    @naeemullahkhan6965 3 роки тому +1

    Thanks for the nice work. I have understood the distance calculation from the camera to the object using two cameras.
    Now I am doing the following steps:
    1. Calibrating the cameras
    2. Rectifying the frames
    3. Calculating the distance to each object using your approach
    I have implemented and changed your code for multi-object detection and distance calculation for each object from the cameras, like you did for a single object. Now I am trying to calculate the inter-distance between the objects by getting the three coordinates X, Y, Z as you calculated for a single object. After calculating the 3D XYZ coordinates for each object, I calculate the Euclidean distance between their XYZ coordinates. First of all, is this the right approach? Secondly, I have a problem related to changes in orientation: when I rotate the objects without changing their relative position, the distance between them changes. Why is that? Can you guide me on what I am doing wrong? I am looking forward to your response. Thank you.
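
    A minimal sketch of the inter-object step described above: once each object's (X, Y, Z) has been triangulated, the spacing between two objects is the Euclidean norm of the difference (the coordinates below are made-up examples):

    ```python
    import math

    def euclidean_distance(p, q):
        """Straight-line distance between two 3D points, in the same units as the inputs."""
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

    obj_a = (10.0, 2.0, 36.0)    # example XYZ of object A
    obj_b = (-4.0, 1.5, 40.0)    # example XYZ of object B
    print(f'inter-object distance: {euclidean_distance(obj_a, obj_b):.2f}')
    ```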

    • @ClaytonDarwin
      @ClaytonDarwin  3 роки тому

      I'll respond to your email as soon as I'm able.

  • @sablezubshruz9811
    @sablezubshruz9811 4 роки тому +1

    Awesome!!! So detailed explanation. Thank you SO MUCH!

  • @alexanderbermudezcastaneda5788
    @alexanderbermudezcastaneda5788 4 роки тому +1

    Dear Clayton, this very interesting tutorial is what I had been looking for. Now I want to write my own code with the mathematics from the video, but two doubts have arisen that I hope you can help me with:
    1. Following the tutorial from minute 5: what if the point is on the left of the frame, between 0 and 320 px? I calculated the angles using complementary angles; the formula I use, whether the point is on the left or the right, is the following:
    a = number of pixels the point is away from the center (half of the frame, 320)
    x = arctan(a / d) (d = calculated distance to the frame, same as in the video)
    Having the theoretical angle x and knowing the camera's angle of view (60° in my case), I did the following operation:
    angle sought = 180° - (30° (angle of the middle of the frame) + 60° (angle where the camera has no vision) + x°)
    but I'm not sure about it.
    2. It is still not clear to me why there are angles with respect to X and Y; wouldn't that result in two different distances? And how are the 4 angles related?
    Thank you very much in advance for your time, and I hope these questions aren't too inconsistent.

    • @ClaytonDarwin
      @ClaytonDarwin  4 роки тому

      In my example, there are four (4) distances being calculated: the distance from each camera to the object (the hypotenuse of each right triangle) on plane XY, the distance from the X axis between the cameras to the object (the common side of the triangles) on plane XY, and finally the hypotenuse of a triangle in 3D space from the object to a point between the cameras (compensating for the angle above the horizon).
      You can do the math in several ways. However, they are all based on the angles from the cameras and the distance between the cameras. Those are your reference measurements. You should be able to calculate the distance to any point identifiable in both camera frames. Work your method out on paper and check that it works on known values/examples. You may have to get your trigonometry books out. Then it will work when you code it.
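
      A rough sketch of that geometry (an illustration under stated assumptions, not Clayton's code): two level, parallel cameras on a shared baseline along X, with angles measured from each camera's optical axis, positive to the right; the FOV, resolution, pixel values, and 6-inch separation below are placeholders:

      ```python
      import math

      def pixel_to_angle(px, frame_size, fov_deg):
          """Signed angle (radians) from the frame center for a pixel position along one axis."""
          f_px = (frame_size / 2) / math.tan(math.radians(fov_deg) / 2)  # focal length in pixels
          return math.atan((px - frame_size / 2) / f_px)

      def triangulate(angle_left, angle_right, elev_left, separation):
          """Target position relative to the midpoint between two parallel cameras."""
          z = separation / (math.tan(angle_left) - math.tan(angle_right))  # depth from the baseline
          x = z * math.tan(angle_left) - separation / 2                    # left/right offset from the midpoint
          ground_range = math.hypot(z, z * math.tan(angle_left))           # range from the left camera in the XZ plane
          y = ground_range * math.tan(elev_left)                           # height from the left camera's vertical angle
          d = math.sqrt(x * x + y * y + z * z)                             # straight-line distance from the midpoint
          return x, y, z, d

      # Example with made-up pixel positions, a 60x47 degree FOV, and 6 units of separation.
      a = pixel_to_angle(420, 640, 60.0)     # target column in the left frame
      b = pixel_to_angle(250, 640, 60.0)     # target column in the right frame
      e = -pixel_to_angle(290, 480, 47.0)    # vertical angle (negated: pixel y grows downward)
      print(triangulate(a, b, e, 6.0))
      ```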

  • @parullakhan1528
    @parullakhan1528 4 роки тому +1

    Thanks a lot for making it. I was looking for exactly this type of video and finally found it. It is great that you are sharing your technique with us. Can you make more videos on stereo vision?

    • @ClaytonDarwin
      @ClaytonDarwin  4 роки тому +1

      Thanks. I'll try to make more as soon as I have time.

  • @hernandobolanos798
    @hernandobolanos798 2 роки тому +1

    I have replicated this project step by step and it works perfectly. I am still evaluating whether I can implement some of these ideas in a project that I am conceptualizing. Thanks a lot for sharing. I agree that if the cameras are quite similar in performance, the angles strategy would be the final calibration.

    • @ClaytonDarwin
      @ClaytonDarwin  2 роки тому

      Cool. Check out the auto-measurement video (it's newer). It has a different way of calibrating the cameras (one camera) that might help. At some point I will merge the two projects, but I haven't had time lately.

  • @sathishbabu3867
    @sathishbabu3867 4 роки тому +1

    I got this error, please help: AttributeError: 'NoneType' object has no attribute 'get'

    • @ClaytonDarwin
      @ClaytonDarwin  4 роки тому

      You haven't provided enough information from the error message.

  • @hikmetcankoseoglu5439
    @hikmetcankoseoglu5439 5 років тому +1

    Hello, I am trying to make a school project using this system and I want to contact you. Is there any mail address that I can use for communication?

    • @ClaytonDarwin
      @ClaytonDarwin  5 років тому

      You can get my email from my YouTube page.

    • @hikmetcankoseoglu5439
      @hikmetcankoseoglu5439 5 років тому

      @Clayton Darwin I actually could not find your email, can you please write it

    • @ClaytonDarwin
      @ClaytonDarwin  5 років тому

      Use the "About" tab. There is a view email address link.

  • @christiantheriault3139
    @christiantheriault3139 4 роки тому +1

    Hello Clayton, thank you very much. I'm trying to fully grasp/visualize the principles. We compute x, y and z distances (coordinates) in space from the cameras.
    From the verso of your sheet of paper, using the horizontal axis on the frame, we get angles A and B (cam A and cam B). Then from the recto of your sheet of paper, using A and B, we get the distance "d", which would be the distance (coordinate) from the camera to the object along the "x" axis in space? Then using the vertical axis on the frame, you find a new A and a new B and compute (recto) a new "d", which would be the distance (coordinate) from the camera to the object along the "y" axis in space? So "d" is computed twice, once for x in space and once for y in space? If so, how do you compute "z"?

    • @ClaytonDarwin
      @ClaytonDarwin  4 роки тому +2

      The first step is as you said. Get the horizontal angles from each camera to the object and use these to compute the distance from the cameras to the object. You can then use that distance and the vertical angle from one of the cameras to locate the object on the vertical plane. That is enough to locate the object in 3D.

    • @christiantheriault3139
      @christiantheriault3139 4 роки тому

      @@ClaytonDarwin I see ! Thank you.

  • @tudormuntean3299
    @tudormuntean3299 3 роки тому +1

    Gosh, why don't I learn this in school?

  • @Dsenpai-xm2zx
    @Dsenpai-xm2zx 4 роки тому +1

    Hi Clayton, I still don't get it: why do you need to calculate Y? Don't you already get the distance using only X?

    • @ClaytonDarwin
      @ClaytonDarwin  4 роки тому +1

      In two dimensions the distance from the center point is the hypotenuse of the triangle with base x and side y.

  • @浆果橙橙
    @浆果橙橙 5 років тому +1

    Hi Clayton, thank you for your video. I am wondering if it is possible to do this project on a Raspberry Pi with two USB cameras or Pi cameras?

    • @ClaytonDarwin
      @ClaytonDarwin  5 років тому

      Yes, the code should work fine. However you may have to slow the frame rate down to keep the processor from running at 100% all the time and lagging. I'd start with 10 FPS, and see how that works.

    • @浆果橙橙
      @浆果橙橙 5 років тому

      @@ClaytonDarwin Thank you

  • @pranavravuri8717
    @pranavravuri8717 4 роки тому +1

    Sir, how did you localize the point in the two images? Is it segmentation or something else?

    • @ClaytonDarwin
      @ClaytonDarwin  4 роки тому

      In this case, I'm using the centroid of the largest contour in each frame. But you could do it another way, such as selecting the largest contour in one frame, then using image recognition to find the nearest match in the other frame. Or maybe a combination of both.
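
      A small sketch of the "centroid of the largest contour" idea, assuming `mask` is a binary image (for example from motion detection or thresholding); the blob drawn here just makes the example self-contained:

      ```python
      import cv2
      import numpy as np

      mask = np.zeros((480, 640), dtype=np.uint8)
      cv2.circle(mask, (400, 300), 40, 255, -1)      # fake blob so the example runs on its own

      # OpenCV 4.x returns (contours, hierarchy); 3.x adds the image as a first value.
      contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
      if contours:
          m = cv2.moments(max(contours, key=cv2.contourArea))
          if m['m00'] > 0:
              cx, cy = m['m10'] / m['m00'], m['m01'] / m['m00']
              print(f'target centroid: ({cx:.1f}, {cy:.1f}) px')
      ```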

  • @김재홍-u8f3k
    @김재홍-u8f3k 4 роки тому +1

    Thanks, excellent example of triangulation. I have one question: my camera lens has barrel distortion. If I use this method, must the lens distortion be removed?

    • @ClaytonDarwin
      @ClaytonDarwin  4 роки тому +1

      Yes. There are some functions in OpenCV that can help with lens distortion. How much effort depends on how much accuracy you need. My cheap setup works fine without distortion correction for what I do.
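
      A hedged sketch of that correction, assuming you already have a camera matrix and distortion coefficients from cv2.calibrateCamera (the numbers and the file name below are placeholders):

      ```python
      import cv2
      import numpy as np

      camera_matrix = np.array([[600.0,   0.0, 320.0],
                                [  0.0, 600.0, 240.0],
                                [  0.0,   0.0,   1.0]])
      dist_coeffs = np.array([-0.25, 0.08, 0.0, 0.0, 0.0])   # k1, k2, p1, p2, k3 (placeholders)

      frame = cv2.imread('frame.jpg')                        # any captured frame
      if frame is not None:
          undistorted = cv2.undistort(frame, camera_matrix, dist_coeffs)
          cv2.imwrite('frame_undistorted.jpg', undistorted)
      ```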

    • @김재홍-u8f3k
      @김재홍-u8f3k 4 роки тому

      Clayton Darwin thanks!

  • @CannibalWarthog
    @CannibalWarthog Рік тому +1

    How far away does the accuracy hold, like walking away down the center?

    • @ClaytonDarwin
      @ClaytonDarwin  Рік тому

      Accuracy will depend on camera pixel density and quality of calibration.

  • @graystudios497
    @graystudios497 4 роки тому +1

    Hey, hi... can we do 3D mapping using webcams with a Raspberry Pi?

    • @ClaytonDarwin
      @ClaytonDarwin  4 роки тому

      Yes. I haven't done much with it, but there is no reason you can't. Processing speed might be an issue.

  • @elyakimlev
    @elyakimlev Рік тому +1

    Thanks for the video.
    I'm using object detection with YOLO. Would I need to detect the same object with 2 cameras separately before trying this triangulation technique?
    I am able to detect objects with one camera without issues, but how would I make sure the object being detected by the 2nd camera is the same one the 1st camera detected? (if there are multiple objects in the scene)

    • @ClaytonDarwin
      @ClaytonDarwin  Рік тому +2

      Yes. I would identify the object in one camera, then select a central area in the object, then find the nearest match to that in the other camera. You could also limit the search area in the second camera to the expected location. For example, the object should have about the same Y axis location in both cameras.
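
      A sketch of that matching step (an assumption about how it could be coded, not the video's method): take a patch around the detection in the left frame and search a horizontal band at roughly the same Y in the right frame; file names and box values are placeholders:

      ```python
      import cv2

      left = cv2.imread('left.jpg', cv2.IMREAD_GRAYSCALE)    # rectified stereo pair assumed on disk
      right = cv2.imread('right.jpg', cv2.IMREAD_GRAYSCALE)

      x, y, w, h = 300, 200, 60, 60              # detection box in the left frame (example values)
      patch = left[y:y + h, x:x + w]

      y0 = max(0, y - 20)
      band = right[y0:y + h + 20, :]             # only search near the same Y
      result = cv2.matchTemplate(band, patch, cv2.TM_CCOEFF_NORMED)
      _, score, _, (mx, my) = cv2.minMaxLoc(result)
      print(f'best match at x={mx}, y={my + y0} (score {score:.2f})')
      ```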

    • @elyakimlev
      @elyakimlev Рік тому +1

      @@ClaytonDarwin Thank you. I'll try it once I get myself a dual-camera set up.

    • @SomnathDas-bt1hi
      @SomnathDas-bt1hi Рік тому

      @elyakimlev Were you successful in achieving that? If so, could you please guide me?

  • @ammeriedem2084
    @ammeriedem2084 5 місяців тому

    Hi Clayton, I'm planning to use a single camera in my project to detect various points of interest like banks and pharmacies and calculate their 3D coordinates to update the spatial database. Since the camera will be in motion, I'm wondering if your method is applicable and how I can adjust the code accordingly . Just to note, the camera's position is always known via GPS .

    • @ClaytonDarwin
      @ClaytonDarwin  5 місяців тому

      Judging the distance from a single camera is very different from the method being demonstrated here. If you knew an accurate GPS location and azimuth to the target from two different places, you could assume two cameras and apply the same process.
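
      A rough 2D sketch of that two-observation idea (my illustration, not from the video): intersect two bearing rays taken from two known positions, on a local flat-earth approximation with positions already converted to meters east/north:

      ```python
      import math

      def intersect_bearings(p1, az1_deg, p2, az2_deg):
          """Intersect two rays given by a start point and a compass azimuth (deg, clockwise from north)."""
          d1 = (math.sin(math.radians(az1_deg)), math.cos(math.radians(az1_deg)))  # unit direction (east, north)
          d2 = (math.sin(math.radians(az2_deg)), math.cos(math.radians(az2_deg)))
          det = d1[0] * (-d2[1]) - d1[1] * (-d2[0])      # 2x2 determinant of [d1, -d2]
          if abs(det) < 1e-9:
              return None                                 # parallel bearings, no fix
          rx, ry = p2[0] - p1[0], p2[1] - p1[1]
          t1 = (rx * (-d2[1]) - ry * (-d2[0])) / det      # Cramer's rule for the first ray's parameter
          return (p1[0] + t1 * d1[0], p1[1] + t1 * d1[1])

      print(intersect_bearings((0.0, 0.0), 45.0, (100.0, 0.0), 315.0))   # expect roughly (50, 50)
      ```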

  • @funenglish8128
    @funenglish8128 5 років тому +1

    If we secure 2 cameras on a car, let's say the distance between them is 1.5 m, is it going to work? I mean, is that how Tesla recognizes objects on camera: first object detection, then distance estimation to the detected object with the method shown in your video, and then just calculating speed? Did I get it right?

    • @ClaytonDarwin
      @ClaytonDarwin  5 років тому

      Theoretically, sure, and for longer distances being that far apart helps. Make sure the alignment is right.

    • @funenglish8128
      @funenglish8128 5 років тому

      @@ClaytonDarwin Thanks :) Is that the principle of how Tesla drives?

    • @ClaytonDarwin
      @ClaytonDarwin  5 років тому

      I don't really know. They may use LIDAR or something. Probably more than one system.

    • @funenglish8128
      @funenglish8128 5 років тому

      @@ClaytonDarwin OK, thanks. One more thing: what if I do this over a distance of a few miles, for detecting and calculating the distance to a ship at sea? Do you think it will work OK at distances over 2 miles?

    • @ClaytonDarwin
      @ClaytonDarwin  5 років тому

      At that distance, the angle between cameras will be very small, maybe undetectable between the two camera frames (not 1 pixel different). Probably won't work.
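
      A quick back-of-the-envelope check of that point, with assumed numbers (1.5 m between cameras, a target 2 miles away, a 60-degree / 640-pixel camera):

      ```python
      import math

      baseline = 1.5                       # meters between cameras
      rng = 2 * 1609.34                    # 2 miles in meters
      fov_deg, width_px = 60.0, 640

      disparity_angle = math.degrees(math.atan(baseline / rng))  # angle difference between the two views
      deg_per_px = fov_deg / width_px                             # rough angular size of one pixel
      print(f'{disparity_angle:.4f} deg vs {deg_per_px:.4f} deg per pixel '
            f'-> about {disparity_angle / deg_per_px:.2f} px of disparity')
      ```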

  • @meghashyamjoshi1094
    @meghashyamjoshi1094 4 роки тому +1

    Finally it worked for me with Logitech C270 stereo vision on OpenCV. Thank you so much.

  • @KKMaity
    @KKMaity 4 роки тому

    Is it possible to know x and y coordinates using one camera, and is it possible to know the angle between two coordinates with respect to a horizontal or vertical line in the 2D xy plane/image?

    • @ClaytonDarwin
      @ClaytonDarwin  4 роки тому

      As long as you know one measurement, like the distance from the camera, or the length of an object in the frame, you can calculate other angles and distances in the frame. Mostly trigonometry. There will be some deviation from the lens distortion.

    • @KKMaity
      @KKMaity 4 роки тому

      @@ClaytonDarwin The objects would be circular, I mean circles. The circle sizes are known and the background page size is known. I only want to know the x and y values of the centre of any circle using one camera in a top view.

    • @ClaytonDarwin
      @ClaytonDarwin  4 роки тому

      Yes, opencv can easily find circles and tell you the center point in pixels. If you know the size of the circle, then you could convert pixels to another measurement scale.
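
      A sketch of the circle-finding step mentioned here, using OpenCV's Hough transform; the file name, parameter values, and the known diameter are assumptions to tune for your images:

      ```python
      import cv2
      import numpy as np

      gray = cv2.imread('top_view.jpg', cv2.IMREAD_GRAYSCALE)   # top-view image assumed on disk
      gray = cv2.medianBlur(gray, 5)

      circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1, minDist=40,
                                 param1=100, param2=30, minRadius=10, maxRadius=80)
      if circles is not None:
          known_diameter_mm = 50.0                      # real circle size (assumption)
          for x, y, r in np.round(circles[0]).astype(int):
              mm_per_px = known_diameter_mm / (2 * r)   # scale factor from the known size
              print(f'center ({x}, {y}) px, radius {r} px, scale {mm_per_px:.3f} mm/px')
      ```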

  • @ammeriedem2084
    @ammeriedem2084 5 місяців тому

    Excellent work! I have a question: are the X, Y, Z, and D coordinates in pixel units?

  • @tanmaydeshpande2409
    @tanmaydeshpande2409 3 роки тому

    Sir, I am working on a project where I am creating an Object Detection model for Autonomous Vehicles in different weather conditions. Here, depending on the information in the image, I want to vary the intensity of the weather condition filter to augment the images. The images have roads, cars, traffic signs, buildings, etc the typical Autonomous Vehicle dataset. The dataset that I am working on has images from the dashboard of the car. Now, if a car is closer, the intensity of the filter is directly proportional to the distance between the camera and the objects.
    What should my approach be, or are there any specific steps to achieve this?

  • @rodrigomayans9092
    @rodrigomayans9092 8 місяців тому

    Hello Darwin. Excellent video, and thanks for sharing this great work. I have a question that, for all I know, may already have been asked and answered. What kind of object does this code detect? For example, would it recognize a ruler, a hand, any object, or, as it seems to me, could it be something yellowish? I would appreciate your response. Greetings.

    • @ClaytonDarwin
      @ClaytonDarwin  8 місяців тому

      In this case I'm just detecting motion. But with opencv you can also detect objects, shapes, and colors.

  • @langoonasse771
    @langoonasse771 Рік тому

    So a quick question. Are angles A and B for triangulation in the X and Y directions just the angle values you get from taking the inverse tangent of pixels from center over focal length?

  • @pythonner3644
    @pythonner3644 4 роки тому +1

    i think another method would be to find height ranges at some distances. It may be possible

    • @ClaytonDarwin
      @ClaytonDarwin  4 роки тому +1

      Sure. This is just one way to do it when you have no references.

    • @pythonner3644
      @pythonner3644 4 роки тому +1

      @@ClaytonDarwin Okay, I get it. Some people on Facebook have done it by taking references and ratios, and that would change every time the camera is adjusted.

  • @lucasxas
    @lucasxas 6 місяців тому

    Hi Clayton, is it possible to use this approach with cameras with different FoV angles?

  • @laluprasad508
    @laluprasad508 3 роки тому

    Will a couple of stereo cameras be sufficient for the cricket ball tracking application? Also how am I supposed to accurately superimpose the 3d coordinates obtained on a 3d model of the pitch?

  • @laluprasad508
    @laluprasad508 3 роки тому +1

    I’m working on a tennis ball tracker for the application of cricket. A widely renowned example would be the hawkeye. I wanted to know whether placing the two cameras wider apart would improve or decrease the accuracy.

    • @ClaytonDarwin
      @ClaytonDarwin  3 роки тому +1

      In general, 1) wider camera spacing improves accuracy, and 2) having the target in the center of the frame improves accuracy. You have to balance the two.

    • @laluprasad508
      @laluprasad508 3 роки тому

      Thanks for the quick response Clayton! Would you be willing to suggest a tech/hardware stack (workflow) if I sent you a 3d model of the place where it’s going to be planted? I would be eternally grateful!

    • @ClaytonDarwin
      @ClaytonDarwin  3 роки тому

      I don't have the qualifications/experience to do that.

    • @texasfossilguy
      @texasfossilguy Рік тому

      You need 4 cameras, one at each corner of the court. If you did 6, with two at half court, you could get position but use pairs to average position and account for errors at the farthest point between them (which would be the center of the court for 4 cameras, or the center of each player's half for 6). Higher-MP cameras would increase accuracy, so 6 100 MP cameras would give you the most accuracy, and averaging the 4 cameras (the two pairs on each side of one half court) would give you the position on that half court and help minimize errors.
      They sell a 100 MP Pi cam from Arducam for $400, so it'd cost ~$3200 to build the highest-resolution system you could. Then keep tinkering with it.

  • @huangwanzhen3168
    @huangwanzhen3168 5 років тому

    Hi, Clayton! Great work, but I am curious: after you calibrate the two cameras, you build a coordinate system at your frame center, and I can get the L/R camera angles and the distance between the cameras, but if I move the target, the angles from the L/R cameras change, and the distance also changes. How do you realize the target coordinate system in your calculation? I am also using Python and OpenCV for my project, but I am having trouble with this part, and I want to use a Kalman filter to implement motion detection.

    • @ClaytonDarwin
      @ClaytonDarwin  5 років тому

      If the cameras are in fixed positions relative to each other, and you can determine the angles to target, then you can calculate distance. Then you can determine target position in 3d space relative to the cameras. And if you know the position of the cameras, you can locate the target relative to anything else. If the target moves, or if the cameras move, recalculate. I'm recalculating with every frame.

  • @ritvikdayal3735
    @ritvikdayal3735 5 років тому

    Thanks a lot, sir. The best thing about the video is that it explains the small but necessary details we should take into consideration.

  • @RoboSidekick
    @RoboSidekick Рік тому

    best explanation so far. It will help our robotic project. thank you

  • @robins341
    @robins341 Рік тому

    This is great! Question, though: what if the cameras are not fixed? For example, what if they are both pan-tilt-zoom cameras, tracking the object (i.e., always in the center of the camera frames)?

    • @ClaytonDarwin
      @ClaytonDarwin  Рік тому +2

      This code would not work. However it could be done if you knew the pan/tilt angles and the distance between the cameras. A lot more complicated.

  • @nimalankarthik2284
    @nimalankarthik2284 4 роки тому

    Really appreciate the video.
    I am running into an issue while trying to run your code, can you help me out?
    if cv2.getWindowProperty('Left Camera 1',cv2.WND_PROP_VISIBLE) < 1:
    AttributeError: module 'cv2' has no attribute 'WND_PROP_VISIBLE'

    • @ClaytonDarwin
      @ClaytonDarwin  4 роки тому

      I was running on Linux. What are you using? Not every option works for every operating system.

  • @mehmetgul8686
    @mehmetgul8686 4 роки тому

    Dear Clayton, first of all, thanks for your tutorials. I tried this code but I get an error: "[ WARN:1] global C:\projects\opencv-python\opencv\modules\videoio\src\cap_msmf.cpp (674) SourceReaderCB::~SourceReaderCB terminating async callback".
    Can you help me with this? I get an image for only one second and then it disappears. Can you share your email so I can send you this error? Thanks a lot.

    • @ClaytonDarwin
      @ClaytonDarwin  4 роки тому

      Look in my "about" tab and you can get my email. Or look me up on linkedin.

  • @hgm994
    @hgm994 5 років тому

    @Clayton Darwin, hi, thanks for the video, it is very informative. I just want to ask whether it is possible to reconstruct a 3D image. I am using multiple stereo cameras, and from them I get images from which I have to reconstruct the 3D image and separate the object from the background. Looking forward to your answer. Thanks.

    • @ClaytonDarwin
      @ClaytonDarwin  5 років тому

      There is a built-in function/method. See opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_calib3d/py_depthmap/py_depthmap.html
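
      For reference, a small sketch of the block-matching depth map that tutorial covers, assuming a rectified grayscale left/right pair of the same size (file names and parameters are placeholders):

      ```python
      import cv2

      left = cv2.imread('left.jpg', cv2.IMREAD_GRAYSCALE)     # rectified pair assumed on disk
      right = cv2.imread('right.jpg', cv2.IMREAD_GRAYSCALE)

      stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)   # tune for your setup
      disparity = stereo.compute(left, right)                         # larger values = closer objects

      vis = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX).astype('uint8')
      cv2.imwrite('disparity.png', vis)
      ```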

  • @buichien5290
    @buichien5290 4 місяці тому

    Hello Darwin, can you change the motion detection so that, instead of detecting any motion, it detects motion at a certain point on the hand? I hope that's okay. Please help, fix the code and post it in my comment section. Thank you so much.

    • @ClaytonDarwin
      @ClaytonDarwin  4 місяці тому

      It could be done. I don't have time to do it.

    • @buichien5290
      @buichien5290 4 місяці тому

      @@ClaytonDarwin Please help me, I'm working on a project to graduate from a university in Vietnam

  • @buichien5290
    @buichien5290 3 місяці тому

    Hi Darwin. Why is it that when I actually check, Y and Z show the correct values and X shows the wrong value?

    • @buichien5290
      @buichien5290 3 місяці тому

      The X coordinate origin is moved to the right camera

    • @ClaytonDarwin
      @ClaytonDarwin  3 місяці тому

      This is a program I wrote one weekend 5 years ago. I don't have time to support it. You're just going to have to figure it out yourself.

  • @tarekamami6021
    @tarekamami6021 3 роки тому

    What a great job Man , thanks , you taught me something new ,👍🏻

  • @dprobotix
    @dprobotix 5 років тому

    Thanks for the video, good information. I have one question: what do the values in x, y tell us? Could you explain briefly?

    • @ClaytonDarwin
      @ClaytonDarwin  5 років тому

      The X, Y and Z are the distances from the center point between the two cameras along the respective axes. Together, using the Pythagorean theorem, they give us distance D to the target.

  • @gundeepsingh4871
    @gundeepsingh4871 3 роки тому

    Hello sir, I need your help. I am making a project in which we detect multiple objects with distance estimation using a single camera, for visually impaired persons. If you can guide me I would be really thankful. Please reply to me ASAP! Thanks, sir.

    • @ClaytonDarwin
      @ClaytonDarwin  3 роки тому

      Detecting distance from one camera will be difficult. You will have to know the size of the target object to estimate distance.

  • @angelsandemons
    @angelsandemons 2 роки тому

    line 120, in run
    frame2 = ct2.next(black=True, wait=1)
    line 445, in next
    frame = self.buffer.get(timeout=wait)
    AttributeError: 'NoneType' object has no attribute 'get'
    Any help?

    • @ClaytonDarwin
      @ClaytonDarwin  2 роки тому +1

      If frame is None, your camera can't be read. Probably you have the wrong address.

  • @manjurao
    @manjurao 4 роки тому

    Awesome, Clayton, really helpful video. I am new to image processing using OpenCV but familiar with Python. Currently I am working on a small proof of concept in the golf area: finding the distance between the flag pin (hole) and the ball(s) through images taken from a high-resolution camera. I am using only one camera to take snapshots frequently, which are used for image processing. At any point in time, the camera and the flag pin (hole) are a fixed distance apart, and the only variation is the ball's movement. The camera, flag pin (hole) and ball form a triangle, and I am looking for a way to get tan(θ) for the distance from the camera to the flag pin, and to use that reference to get the distance from the camera to the ball, and then the distance between the flag pin (hole) and the ball. Please can you help me with how to go about it? I would really appreciate it if you could help me solve this puzzle (using Python and OpenCV) :).

    • @ClaytonDarwin
      @ClaytonDarwin  4 роки тому

      The problem is that you probably won't be a fixed distance from the ball. Even if you are a fixed distance from the hole, that won't help you locate the ball accurately because the green isn't required to be a horizontal plane. Since the ball is a specific size, you may be able to gauge distance based on size on camera frame if you have a super high resolution camera, but some kind of triangulation will probably be required. Locating the points on a triangle requires 3 measures, side angle side, or angle side angle. You have to figure out how to control for those. Might be easier to get distance with a laser range finder.

  • @laluprasad508
    @laluprasad508 3 роки тому

    I’m working on a tennis ball tracker for the application of cricket. A widely renowned example would be the hawkeye. I wanted to know whether placing the two cameras wider apart would improve or decrease the accuracy.

    • @texasfossilguy
      @texasfossilguy Рік тому

      the accuracy limit is the camera resolution

  • @coder6238
    @coder6238 5 років тому

    Thank you very much for the information you provided. I have a question. Is it possible to get 3d dimension from the cameras placed on top of the soccer goal? Something like picture in this link. i.ibb.co/Mh5Q1jp/cameras.jpg

    • @ClaytonDarwin
      @ClaytonDarwin  5 років тому +1

      Sure. It should work if you can get the image recognition to work fast enough for sports.

  • @IgorSwxy
    @IgorSwxy 3 роки тому

    I have read that synchronization between 2 cameras may affect the disparity map and distance measures. Did you have any problems with your cameras?

    • @ClaytonDarwin
      @ClaytonDarwin  3 роки тому +1

      I haven't done anything that requires high accuracy yet, so I haven't had any problems. But certainly I can see that that could be an issue. I did have to run the cameras in separate threads, which allows basic synchronization. Otherwise they would be way off.

  • @skvallab
    @skvallab 4 місяці тому

    @claytonDarwin: Great explanation. I have a doubt: if it's just X and Y I care about, I'm better off with just one camera, right?

    • @ClaytonDarwin
      @ClaytonDarwin  4 місяці тому +1

      X and Y change depending on Z. If Z is fixed, then one camera will work. See my other video on measurement. I have an example.

    • @skvallab
      @skvallab 4 місяці тому

      @@ClaytonDarwin I checked your video, but can I work without having a known distance to calibrate on the screen?

    • @ClaytonDarwin
      @ClaytonDarwin  4 місяці тому +1

      You have to have a known measurement in the formula.

  • @Engineer1010
    @Engineer1010 4 роки тому

    Thanks for making this video! Very inspirational. Just one question; how would camera calibration improve this setup? Would you get more accurate results, for example? Or would the setup be more robust to errors in the camera alignment?

    • @ClaytonDarwin
      @ClaytonDarwin  4 роки тому +2

      In my latest project (see the laser tracker) I'm exploring this topic. I think that yes, calibration will help. But I'm also going to do some manual checks with a printed chart. The trick is getting the most accurate angles on which to base all the other measurements. With my new project I'm having trouble with parallax that I have to solve first.

  • @devfromthefuture506
    @devfromthefuture506 4 роки тому +1

    Why is the distance measurement so slow?

  • @indyit8699
    @indyit8699 3 роки тому

    Thanks, this video showed me how a machine can know an object's distance.

  • @alimohsen7071
    @alimohsen7071 5 років тому +1

    Hello. Thank you for the wonderful work, it's excellent. I have one question: when I run the code, the frames show black for one second and then disappear. I tried to solve the problem but did not succeed.

    • @ClaytonDarwin
      @ClaytonDarwin  5 років тому

      It sounds like your cameras (you need 2) are not set up correctly. Read the exception error string. What does it say?

    • @alimohsen7071
      @alimohsen7071 5 років тому

      There are errors. The first is in this line: targets1 = targeter1.targets(frame1),
      and the second error is in this line:
      frame3, contours, hierarchy = cv2.findContours(frame3, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE), and the message is:
      ValueError: not enough values to unpack (expected 3, got 2). It also prints DONE and "WARN:1 terminating async callback".

    • @ClaytonDarwin
      @ClaytonDarwin  5 років тому

      Your version of OpenCV is returning 2 rather than 3 values from the findContours function. You'll have to change the code to match what is returned. I don't know what that is because my version returns 3 items.

    • @alimohsen7071
      @alimohsen7071 5 років тому

      I use the latest version of OpenCV 3.4 and the latest version of Python, and I use PyCharm on Windows 10. In your opinion, is the solution to this problem changing that line, or is something else needed? Please help me, because my university graduation project depends on this code. Thanks.

    • @ClaytonDarwin
      @ClaytonDarwin  5 років тому

      You need to run the findcontours function by itself and determine what it returns. I use Linux. Not Windows, so I can't tell you. The code works for me. Once you know what it returns, change the code accordingly so that it works for you. Keep looking at the errors, isolating the problem line, and making changes. That's how you troubleshoot.

  • @faizaali895
    @faizaali895 4 роки тому

    From 15:48 in the video, you measure the distance between the cameras' centre-points with a ruler. When I do that, I get a slightly different distance when measuring close-up compared to further away. What factors might be causing it? And why is it necessary to get the same distance no matter where you measure?

    • @ClaytonDarwin
      @ClaytonDarwin  4 роки тому +1

      Theoretically the cameras should be parallel for the math to work out correctly. The more accurate you can be, the better your results will be.

  • @MrAnti3z
    @MrAnti3z 5 років тому

    Many thanks for such wonderful work and results. It would be very cool if there were a chance to share the code.

  • @imaneakr46
    @imaneakr46 3 роки тому

    Sir, can you help me and guide me to find out a method to measure width/height of an object through an image with a single camera? is it even possible?

    • @ClaytonDarwin
      @ClaytonDarwin  3 роки тому

      ua-cam.com/video/1CVmjTcSpIw/v-deo.html

  • @meghashyamjoshi1094
    @meghashyamjoshi1094 5 років тому

    This is the error on Ubuntu 18.04, please help me solve it:
    python3 -u targeting_tools.py
    VIDEOIO ERROR: V4L2: Could not obtain specifics of capture window.
    VIDEOIO ERROR: V4L: can't open camera by index 1
    /dev/video1 does not support memory mapping
    VIDEOIO ERROR: V4L: can't open camera by index 2
    Traceback (most recent call last):
    File "targeting_tools.py", line 129, in run
    frame1 = ct1.next(black=True,wait=1)
    File "targeting_tools.py", line 455, in next
    frame = self.buffer.get(timeout=wait)
    AttributeError: 'NoneType' object has no attribute 'get'
    DONE

    • @ClaytonDarwin
      @ClaytonDarwin  5 років тому +1

      Your cameras are not working correctly. They are probably not set up with the right parameters. Start by getting one camera to work in opencv by itself. Once you can do that, use those parameters to set up the cameras in the targeting tools script.

    • @meghashyamjoshi1094
      @meghashyamjoshi1094 5 років тому

      @@ClaytonDarwin Thank you so much. I am going to use a Logitech C270.

  • @meghashyamjoshi1094
    @meghashyamjoshi1094 5 років тому

    I have an issue:
    /targetingtools.py", line 129, in run
    frame1 = ct1.next(black=True,wait=1)
    /targetingtools.py", line 455, in next
    frame = self.buffer.get(timeout=wait)
    AttributeError: 'NoneType' object has no attribute 'get

    • @ClaytonDarwin
      @ClaytonDarwin  5 років тому +1

      Why is self.buffer == None? That's the problem. Did you delete it? Did you call the camera thread start() function? Did it start the thread correctly? Did you name your camera correctly (correct address)?

    • @meghashyamjoshi1094
      @meghashyamjoshi1094 5 років тому

      @@ClaytonDarwin Can you tell me how to call self.start() in the program?

  • @anwarawad6695
    @anwarawad6695 5 років тому

    Hello, your work is actually very nice, but I want to ask: can we measure the distance with one camera instead of two? Thanks in advance.

    • @ClaytonDarwin
      @ClaytonDarwin  5 років тому

      If you know the exact size of the target ahead of time, you can estimate the distance using one point of view. Otherwise, two points of view, with a known distance between them is required.
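
      A quick sketch of that single-camera case: if the target's real size is known, its apparent size in pixels gives the range (the FOV, resolution and sizes below are made-up examples):

      ```python
      import math

      frame_width_px = 640
      horizontal_fov_deg = 60.0                       # assumed camera spec
      focal_px = (frame_width_px / 2) / math.tan(math.radians(horizontal_fov_deg) / 2)

      real_width = 8.5                                # known target width (e.g. inches)
      apparent_width_px = 120                         # measured width in the frame

      distance = real_width * focal_px / apparent_width_px
      print(f'estimated distance: {distance:.1f} (same units as real_width)')
      ```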

  • @riseiot
    @riseiot 5 років тому

    Thank you for this amazing video.
    I need to ask: what is the maximum distance range in this project?

    • @ClaytonDarwin
      @ClaytonDarwin  5 років тому

      I don't know yet. Still working on it. I haven't taken it outside.

  • @alponurkaracayalumni7111
    @alponurkaracayalumni7111 2 роки тому

    Hello sir, thanks for the information. I just want to ask this: I calibrated my camera with a chessboard, so is it okay to use this code with a camera that was calibrated with a chessboard? Also, does the shared code include camera calibration and distance (angles + triangulation), or just distance (angles + triangulation)?

    • @ClaytonDarwin
      @ClaytonDarwin  2 роки тому

      This code depends on knowing the viewing angle of the camera. That is the method of calibration. As long as you do that, you should be okay.

  • @buichien5290
    @buichien5290 3 місяці тому

    Hi Darwin. The camera returns 2 angles, the horizontal angle and the vertical angle, right? How do you know what angle it will return?

    • @ClaytonDarwin
      @ClaytonDarwin  3 місяці тому

      They are based on the center of the frame.

    • @buichien5290
      @buichien5290 3 місяці тому

      @@ClaytonDarwin From there, how is my angle oriented relative to the horizontal angle? Please help me with how to do it in specific detail.

    • @ClaytonDarwin
      @ClaytonDarwin  3 місяці тому

      It's all in the code. I don't currently have time to revisit this project.

  • @ducks-on-quack
    @ducks-on-quack 5 років тому

    I can't get your code to run. It's throwing
    `
    File "c:/Users/.../Desktop/gdrive.py", line 130, in run
    frame2 = ct2.next(black=True,wait=1)
    File "c:/Users/dvkbp7/Desktop/gdrive.py", line 455, in next
    frame = self.buffer.get(timeout=wait)
    AttributeError: 'NoneType' object has no attribute 'get'
    `
    Did anyone else have this issue?

    • @ClaytonDarwin
      @ClaytonDarwin  5 років тому

      I don't recognize the gdrive.py script? What is that from?

    • @meghashyamjoshi1094
      @meghashyamjoshi1094 5 років тому

      Yes, I have the same issue:
      File "c:/Users/Meghasham Joshi/Documents/targetingtools.py", line 129, in run
      frame1 = ct1.next(black=True,wait=1)
      File "c:/Users/Meghasham Joshi/Documents/targetingtools.py", line 455, in next
      frame = self.buffer.get(timeout=wait)
      AttributeError: 'NoneType' object has no attribute 'get

    • @ClaytonDarwin
      @ClaytonDarwin  5 років тому

      Why is self.buffer == None? That's the problem. Did you delete it? Did you call the camera thread start() function? Did it start the thread correctly? Did you name your camera correctly (correct address)?

  • @RafaelIliasov
    @RafaelIliasov Рік тому +1

    Hi!

  • @varunvora816
    @varunvora816 4 роки тому

    I did not understand what the distance between the camera and the screen is, that entire part. What is the screen you are talking about, and how do you calculate the angle from that?

    • @ClaytonDarwin
      @ClaytonDarwin  4 роки тому +1

      It's a hypothetical frame derived from the camera angles and camera resolution. You need that to get the hypothetical distance from the camera to the plane where the pixels are, so that you can calculate the angle.

  • @manarmahmalji9680
    @manarmahmalji9680 3 роки тому

    very useful video. thank you !

  • @ChrisDembinsky
    @ChrisDembinsky 2 роки тому

    What about using a single camera, such as one of these ua-cam.com/play/PLIfSLL2wFuaqaT_r3uMl_n7BEM6Llbizz.html, then using known locations in the image to estimate the location of other pixels in the view? I have been trying to find an example of this but no luck yet.

    • @ClaytonDarwin
      @ClaytonDarwin  2 роки тому

      Check the auto measure video. Similar.

  • @nathanielbulawan6582
    @nathanielbulawan6582 4 роки тому +1

    How do you solve for the Z value?

    • @ClaytonDarwin
      @ClaytonDarwin  4 роки тому +2

      Use trigonometry with the variables that you have, like I did in the video.

  • @HuyLe-zi1sz
    @HuyLe-zi1sz 3 роки тому +1

    Can I use 1 camera to calculate all the values X, Y, Z and distance?

    • @ClaytonDarwin
      @ClaytonDarwin  3 роки тому

      Generally speaking, no. There are some tricks you can use for special situations. Like my measurement video.

    • @HuyLe-zi1sz
      @HuyLe-zi1sz 3 роки тому

      @@ClaytonDarwin thank you Darwin. Hopefully your idea helped me find the coordinates of the tomatoes on the tree :D :D

    • @ClaytonDarwin
      @ClaytonDarwin  3 роки тому +1

      If you know the distance from the camera to the target, you can use just 1 camera. See my auto-measuring video.

    • @HuyLe-zi1sz
      @HuyLe-zi1sz 3 роки тому

      @@ClaytonDarwin No, I need all the values x, y, z, and distance, and then add them to the function that calculates the inverse kinematics of my 6-axis robot arm.

  • @harishp6611
    @harishp6611 4 роки тому

    Awesome.!! love from India.

  • @durgeshkulkarni7497
    @durgeshkulkarni7497 4 роки тому

    Hello sir, by using this code can we detect objects the way YOLO does and find the distance?

  • @wkwkzl
    @wkwkzl 4 роки тому

    Traceback (most recent call last):
    File "", line 113, in run
    frame1 = ct1.next(black=True,wait=1)
    File "", line 206, in next
    frame = self.buffer.get(timeout=wait)
    AttributeError: 'NoneType' object has no attribute 'get'
    I get this error, could you help me?

    • @ClaytonDarwin
      @ClaytonDarwin  4 роки тому

      Your camera is probably not connected. Make sure you are using the correct address.

    • @wkwkzl
      @wkwkzl 4 роки тому

      videoio(MSMF): OnReadSample() is called with error status
      videoio(MSMF): async ReadSample() call is failed with error status:
      I can't solve this error.

    • @ClaytonDarwin
      @ClaytonDarwin  4 роки тому

      @@wkwkzl Get the camera.py code from my original targeting video. Make sure you can run both of your cameras with that, one at a time, then try the dual camera setup.

  • @Dsenpai-xm2zx
    @Dsenpai-xm2zx 4 роки тому

    Hi Clayton, thanks for making this video. I wonder, what if I want to detect a car and measure the distance?

    • @ClaytonDarwin
      @ClaytonDarwin  4 роки тому

      The idea is the same, but the process will be more complicated. You will probably need to add an object recognition layer to limit detection to cars, and you will likely have to deal with multiple objects at once. It might get messy, and you may need a powerful processor to get a good frame rate.

  • @Dsenpai-xm2zx
    @Dsenpai-xm2zx 4 роки тому

    Hi Clayton, sorry for asking. I'm getting an error: the code runs but the video just pops up for a second and then exits. The error is:
    Traceback (most recent call last):
    File "C:/Users/Asus/PycharmProjects/liveStream/targeting_tools.py", line 133, in run
    targets1 = targeter1.targets(frame1)
    File "C:/Users/Asus/PycharmProjects/liveStream/targeting_tools.py", line 557, in targets
    frame3,contours,hierarchy = cv2.findContours(frame3,cv2.RETR_EXTERNAL,cv2.CHAIN_APPROX_SIMPLE)
    ValueError: not enough values to unpack (expected 3, got 2)
    DONE
    [ WARN:1] global C:\projects\opencv-python\opencv\modules\videoio\src\cap_msmf.cpp (674) SourceReaderCB::~SourceReaderCB terminating async callback
    [ WARN:2] global C:\projects\opencv-python\opencv\modules\videoio\src\cap_msmf.cpp (674) SourceReaderCB::~SourceReaderCB terminating async callback
    appreciate it if you could help me out.
    Thanks a lot

    • @ClaytonDarwin
      @ClaytonDarwin  4 роки тому

      You are using a new/different version of OpenCV. The output from cv2.findContours has changed. Go to line 557 (line the error says) and change "frame3,contours,hierarchy" to "contours,hierarchy".
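
      If you'd rather not hard-code one version, a version-agnostic variant of that line might look like this (an assumption about how you could patch it, with a stand-in frame so the snippet runs on its own):

      ```python
      import cv2
      import numpy as np

      frame3 = np.zeros((480, 640), dtype=np.uint8)   # stand-in for the thresholded frame in the script

      # OpenCV 3.x returns (image, contours, hierarchy); OpenCV 4.x returns (contours, hierarchy).
      result = cv2.findContours(frame3, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
      contours, hierarchy = result if len(result) == 2 else result[1:]
      print(f'found {len(contours)} contours with OpenCV {cv2.__version__}')
      ```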

    • @Dsenpai-xm2zx
      @Dsenpai-xm2zx 4 роки тому

      @@ClaytonDarwin Hi, thanks. I'm sorry, by the way, for not reading the description of the video first.

    • @ClaytonDarwin
      @ClaytonDarwin  4 роки тому

      Don't worry about it. I should really replace that code.

  • @rohitmohite8707
    @rohitmohite8707 3 роки тому

    I am pretty new to this. How can we get those green horizontal and vertical lines in the camera view as shown in the video? Thank you!

    • @ClaytonDarwin
      @ClaytonDarwin  3 роки тому

      OpenCV has drawing functions for lines, circles, text, etc. that you can use to modify frames.
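
      A tiny sketch of those drawing calls for a crosshair-style overlay (colors, positions and text are arbitrary examples):

      ```python
      import cv2
      import numpy as np

      frame = np.zeros((480, 640, 3), dtype=np.uint8)   # stand-in for a captured frame
      h, w = frame.shape[:2]
      green = (0, 255, 0)                               # BGR

      cv2.line(frame, (w // 2, 0), (w // 2, h), green, 1)   # vertical center line
      cv2.line(frame, (0, h // 2), (w, h // 2), green, 1)   # horizontal center line
      cv2.putText(frame, 'X: 0.0  Y: 0.0', (10, 25),
                  cv2.FONT_HERSHEY_SIMPLEX, 0.6, green, 1)
      cv2.imwrite('overlay.png', frame)
      ```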

    • @rohitmohite8707
      @rohitmohite8707 3 роки тому

      @@ClaytonDarwin oh okay, great Thank You

  • @shehadehd1
    @shehadehd1 5 років тому

    Hi! Thanks for posting the update. Really great stuff!
    I was wondering, is this demo running directly off of the Rock64 board? I'm hesitating to pull the trigger on that board, but if you're getting this level of performance using two usb cameras on the Rock64 then I might just have to make an impulsive late-night purchase haha.
    Thanks ahead of time and keep up the great work!

    • @ClaytonDarwin
      @ClaytonDarwin  5 років тому

      This is running from my PC where I was working on it that evening. Unfortunately, I got distracted from this project by some mechanical changes I want to make (and not having the time to make them) so I haven't got it going on the Rock64 yet and don't know how it will perform. I hope to get back to it soon.

    • @shehadehd1
      @shehadehd1 5 років тому

      Clayton Darwin thank you for the fast response. I’ve watched some of your other videos and they’ve already convinced me to pull the trigger on the Rock64. Hopefully I’ll have an update in the near future regarding its performance.
      I’m looking forward to your next video. Good luck on your changes!

    • @ClaytonDarwin
      @ClaytonDarwin  5 років тому

      Thanks. Looking forward to the Rock64.

  • @meghashyamjoshi1094
    @meghashyamjoshi1094 4 роки тому

    Traceback (most recent call last):
    File "targeting_tools1.py", line 133, in run
    targets1 = targeter1.targets(frame1)
    File "targeting_tools1.py", line 557, in targets
    frame3,contours,hierarchy = cv2.findContours(frame3,cv2.RETR_EXTERNAL,cv2.CHAIN_APPROX_SIMPLE)
    ValueError: not enough values to unpack (expected 3, got 2)
    DONE
    help me to solve this error

    • @ClaytonDarwin
      @ClaytonDarwin  4 роки тому

      The version of opencv you are using returns 2 values for findContours. The version I use returns 3. That's what the error says. You need to determine what values findContours returns and set that line accordingly. Probably doesn't return hierarchy, but I don't know. I'm running using Debian Linux and Python3.6.

    • @dulquerkawsar
      @dulquerkawsar 4 роки тому

      Hi, how did you solve this error?

    • @meghashyamjoshi1094
      @meghashyamjoshi1094 4 роки тому

      @@dulquerkawsar Check your OpenCV version, it should be 3.4.

    • @dulquerkawsar
      @dulquerkawsar 4 роки тому

      @@meghashyamjoshi1094 Thanks

  • @joshikunnam8782
    @joshikunnam8782 3 роки тому +1

    Hi sir, is it possible to use this method to detect the distance of multiple objects? If so, can you make a tutorial on it?

    • @ClaytonDarwin
      @ClaytonDarwin  3 роки тому

      This does not work for distance. I have another video on using triangulation to estimate distance.

    • @joshikunnam8782
      @joshikunnam8782 3 роки тому

      @@ClaytonDarwin could you please send link sir

    • @ClaytonDarwin
      @ClaytonDarwin  3 роки тому

      Sorry. This is the distance video. I thought this comment was from another video. You can track as many objects as you like, but you will have to come up with a different method to target them. I'm using movement of the largest object, but you could use a different method, like color or shape detection, object recognition, etc.

    • @joshikunnam8782
      @joshikunnam8782 3 роки тому

      @@ClaytonDarwin ok sir so thank you for replying

    • @joshikunnam8782
      @joshikunnam8782 3 роки тому

      @@ClaytonDarwin And sir, what is the minimum distance that the cameras have to be from each other?

  • @mahamokrani6397
    @mahamokrani6397 2 роки тому

    Great Tutorial, thank you !!

  • @rustcohle9134
    @rustcohle9134 Рік тому

    Excellent work!
    I have a question and actually need your help: can I measure between two points via this method at real-world scale?

    • @rustcohle9134
      @rustcohle9134 Рік тому

      eureka :)

    • @ClaytonDarwin
      @ClaytonDarwin  Рік тому +1

      I have another video about measurement. You would have to combine the two methods. It would be complicated.

    • @rustcohle9134
      @rustcohle9134 Рік тому

      @@ClaytonDarwin I guess I don't need it. The cosine theorem will work...

  • @AngelusMortis1000
    @AngelusMortis1000 5 років тому

    Could you do a tutorial on depth mapping with openCV?

    • @ClaytonDarwin
      @ClaytonDarwin  5 років тому

      Email me and I'll send you the code.

  • @meghashyamjoshi1094
    @meghashyamjoshi1094 5 років тому

    Hello sir, can we implement this on a Raspberry Pi 4 B?

    • @ClaytonDarwin
      @ClaytonDarwin  5 років тому +1

      Yes. It may run slower (fewer frames per second) but it should work fine.

  • @benjaminalex1748
    @benjaminalex1748 3 роки тому

    Can this code be used for depth mapping?
    Could you do a tutorial on it?

    • @ClaytonDarwin
      @ClaytonDarwin  3 роки тому

      The same idea can be used for depth mapping. However, this uses motion to locate objects in the camera frames. You would have to use another method to identify objects. I probably won't have time to do a video about that for a while.

  • @gabrielepi.3208
    @gabrielepi.3208 2 роки тому

    I have a question. How do you work out the distance between the cameras in pixels? How do you create a unit of measurement when you have both pixels and cm/m?

    • @gabrielepi.3208
      @gabrielepi.3208 2 роки тому

      How did you get this? camera_separation = 5 + 15/16

    • @ClaytonDarwin
      @ClaytonDarwin  2 роки тому +1

      Pixels are used to get angles. Angles and the inter camera distance are used to get measurements in units.

    • @ClaytonDarwin
      @ClaytonDarwin  2 роки тому +1

      With a ruler. Imperial inches.

  • @YazeedAlkosai
    @YazeedAlkosai 4 роки тому

    Is it possible to measure the dimensions of the features of a product?

    • @ClaytonDarwin
      @ClaytonDarwin  4 роки тому

      Yes. Probably so, depending on what features. You could probably just use one camera if you set it at a fixed distance or the product is a known size.

  • @aasthagupta2772
    @aasthagupta2772 3 роки тому

    Sir, can you help me calculate the angle of an object in the image with respect to anything, like the x axis, the y axis, or another object?

    • @ClaytonDarwin
      @ClaytonDarwin  3 роки тому

      I don't have time for any new projects at the moment. However, you can use this code as an example. Angles to the target are part of the calculation.

    • @aasthagupta2772
      @aasthagupta2772 3 роки тому

      @@ClaytonDarwin sir you are referring to which code?

    • @ClaytonDarwin
      @ClaytonDarwin  3 роки тому

      The code referenced in the description.

  • @lassen11
    @lassen11 4 роки тому

    What is your maximum distance range?

    • @ClaytonDarwin
      @ClaytonDarwin  4 роки тому +1

      Accuracy (or range) depends on the camera lens and pixel density, and on the distance between cameras. Here I'm using cheap VGA cameras about 6 inches apart. Seems pretty good to 20 feet inside. Haven't had it outside yet.

  • @faizaali895
    @faizaali895 4 роки тому

    I am using an HSV filter to track an object, but I am having difficulty finding the pixel position values of the object I am tracking. How can I find these pixel values?

    • @ClaytonDarwin
      @ClaytonDarwin  4 роки тому +1

      I think an HSV filter only separates the color. It doesn't provide a location.

    • @faizaali895
      @faizaali895 4 роки тому

      @@ClaytonDarwin Is it not possible to find the location after putting the HSV filter on? Do I have to make the program find the contours of the object first?

    • @ClaytonDarwin
      @ClaytonDarwin  4 роки тому +2

      Yes. The HSV filter will isolate the color, but you will still have to identify a target and find a contour using other methods.
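
      A short sketch of that pipeline (an illustration, not the video's code): isolate the color with an HSV range, then locate the largest contour in the resulting mask; the file name and the yellow-ish HSV bounds are example values:

      ```python
      import cv2
      import numpy as np

      frame = cv2.imread('frame.jpg')                  # any BGR frame containing the colored object
      hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
      mask = cv2.inRange(hsv, np.array([20, 100, 100]), np.array([35, 255, 255]))

      contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
      if contours:
          x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
          print('object pixel position:', (x + w // 2, y + h // 2))
      ```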

    • @faizaali895
      @faizaali895 4 роки тому +1

      @@ClaytonDarwin I will try that :) Thank you!

  • @kohdev2488
    @kohdev2488 3 роки тому

    Hi, when I run your code I get the following errors, can you help me:
    File "C:/Users/18736/PycharmProjects/Projet1076/tes1.py", line 129, in run
    frame1 = ct1.next(black=True, wait=1)
    File "C:/Users/18736/PycharmProjects/Projet1076/tes1.py", line 455, in next
    frame = self.buffer.get(timeout=wait)
    AttributeError: 'NoneType' object has no attribute 'get'

    • @ClaytonDarwin
      @ClaytonDarwin  3 роки тому

      I don't think your cameras are set up correctly. Make sure you can use your cameras via OpenCV before you try this program.

    • @kohdev2488
      @kohdev2488 3 роки тому

      @@ClaytonDarwin Yes, I usually use my camera with OpenCV, but I have only one camera, at index 0, which I use every time.

    • @ClaytonDarwin
      @ClaytonDarwin  3 роки тому +1

      @@kohdev2488 You need 2 cameras.

    • @kohdev2488
      @kohdev2488 3 роки тому

      @@ClaytonDarwin Thank you, it works with 2 cameras. How could I adapt it to one camera? I need the distance to an object for my project.

    • @ClaytonDarwin
      @ClaytonDarwin  3 роки тому

      @@kohdev2488 Triangulation requires 2 cameras.

  • @FalconSmart
    @FalconSmart 3 роки тому

    There is no need to make the 2 cameras look parallel. This can be calibrated.

    • @HuyLe-zi1sz
      @HuyLe-zi1sz 3 роки тому

      Hi there,
      how can you get all the values X, Y, Z and distance with only 1 camera?

    • @FalconSmart
      @FalconSmart 3 роки тому

      @@HuyLe-zi1sz then you need to move the camera with known (calibrated) speed.

  • @황광어
    @황광어 5 років тому

    Nice! I want to study this skill!! Could you share the code?

    • @ClaytonDarwin
      @ClaytonDarwin  5 років тому

      Send me an email.

    • @황광어
      @황광어 5 років тому

      Clayton Darwin ghl92479@gmail.com thanks!

  • @ikhsanrahman9703
    @ikhsanrahman9703 4 роки тому

    Nice. From the X, Y, Z values obtained, how do you reconstruct the image in 3D?

    • @ClaytonDarwin
      @ClaytonDarwin  4 роки тому

      In this case, you're only getting a single point based on movement. To construct a 3d image, you would need many points. There are some functions in OpenCV that can help with that.

    • @ikhsanrahman9703
      @ikhsanrahman9703 4 роки тому

      @@ClaytonDarwin Well, thank you. Anyway, do you know how to generate a point cloud from a 2D image using laser triangulation with a camera? It's more like I have to find the depth (Z axis) of the object so that we can generate the 3D image. Thank you in advance.

    • @ClaytonDarwin
      @ClaytonDarwin  4 роки тому

      I haven't tried it yet.

  • @ayarzuki
    @ayarzuki 3 роки тому +1

    Where is the research paper you used?

    • @ClaytonDarwin
      @ClaytonDarwin  3 роки тому +1

      I'm just making it up as I go along. You can find most of the ideas in Wikipedia.

    • @ayarzuki
      @ayarzuki 3 роки тому

      @@ClaytonDarwin What is the keyword to find those ideas in Wikipedia??

    • @ClaytonDarwin
      @ClaytonDarwin  3 роки тому

      Triangulation. Resection. Intersection.

  • @alimohsen7071
    @alimohsen7071 4 роки тому

    Hello Mr. Clayton ...
    I want the code to calculate the distance for a specific object (such as a person or a car) using the Haar cascade function. I am currently able to make the program calculate the distance to the specified target, but the problem is that when any other unwanted target moves inside the image (like a cat or dog or anything), the program also measures its distance. I just want it to measure the distance to the predetermined target. How can I disable the distance measurement for the other moving objects?

    • @ClaytonDarwin
      @ClaytonDarwin  4 роки тому +2

      Use object recognition to locate your target in one camera frame. Then use a sub-frame of just the target area to locate the target in the other camera frame (OpenCV function). Then use the triangulation method. In other words, don't use motion to detect targets.
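
      A sketch of limiting detection to a known object class with a Haar cascade; the face cascade is used here only because it ships with opencv-python, and the input file name is a placeholder (a person or car cascade would be loaded the same way):

      ```python
      import cv2

      cascade = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')
      frame = cv2.imread('frame.jpg')                  # any captured frame assumed on disk
      gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

      for (x, y, w, h) in cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
          cx, cy = x + w // 2, y + h // 2              # center pixel to feed into the triangulation step
          print('detection center:', (cx, cy))
      ```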

    • @alimohsen7071
      @alimohsen7071 4 роки тому

      I have not gotten a good result... I still have many problems... Could you explain a bit more, like where exactly to put the Haar cascade code in the main code... Thank you.

    • @ClaytonDarwin
      @ClaytonDarwin  4 роки тому +1

      You are going to have to write a new program. You can use mine as an example, but it won't be as simple as just replacing one part. You can also use my code from the facial recognition video.

    • @moazamjalil5071
      @moazamjalil5071 4 роки тому

      Salaam Ali Mohsen Can you give me your code

    • @ClaytonDarwin
      @ClaytonDarwin  4 роки тому

      In the description and first comment.

  • @haytamdz1157
    @haytamdz1157 5 років тому

    cool

  • @yigitgenc1734
    @yigitgenc1734 4 роки тому

    I guess it'd be easier to calculate the position by just finding the sin(angle)/side ratios.

    • @ClaytonDarwin
      @ClaytonDarwin  4 роки тому

      Does that account for the projection of the image on a flat frame?

    • @yigitgenc1734
      @yigitgenc1734 4 роки тому

      @@ClaytonDarwin If we know the 2 angles of the triangle and the distance between the 2 cameras, we can find the 2 other sides by the law of sines.

    • @ClaytonDarwin
      @ClaytonDarwin  4 роки тому +1

      Yes, that's what we're doing in the end. The problem is getting an accurate angular measurement from a flat projection, which is the first part.
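
      A short law-of-sines version of that triangle (an alternative to the tangent form, with interior angles measured from the baseline between the cameras; the numbers are made up):

      ```python
      import math

      sep = 6.0                                  # distance between the cameras
      angle_a = math.radians(78)                 # interior angle at the left camera (from the baseline)
      angle_b = math.radians(82)                 # interior angle at the right camera
      angle_c = math.pi - angle_a - angle_b      # angle at the target

      # Law of sines: the side from the left camera to the target is opposite angle_b.
      range_from_left = sep * math.sin(angle_b) / math.sin(angle_c)
      print(f'range from left camera: {range_from_left:.2f}')
      ```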

    • @yigitgenc1734
      @yigitgenc1734 4 роки тому

      @@ClaytonDarwin Oh alright. I guess i missed that part. Really nice video btw

  • @mantownmedia78
    @mantownmedia78 5 років тому

    Can you measure the dimensions of a face?

    • @ClaytonDarwin
      @ClaytonDarwin  5 років тому

      Yes, but you would have to adapt the algorithm to that specific task.

  • @Dsenpai-xm2zx
    @Dsenpai-xm2zx 3 роки тому

    Hi, I remember commenting and asking a bunch of questions on this video years ago. Now I want to thank Mr. Clayton for sharing this video; it actually saved my final project. I have already graduated and am currently building a company. Thanks to you once again. You helped my future.

    • @ClaytonDarwin
      @ClaytonDarwin  3 роки тому

      Richardo, thanks for sharing this great news. Best of luck on a great career. 👍

    • @ClaytonDarwin
      @ClaytonDarwin  3 роки тому

      Also, what type of company are you building? Just interested.

    • @Dsenpai-xm2zx
      @Dsenpai-xm2zx 3 роки тому

      @@ClaytonDarwin Seriously, you helped a lot. Thanks!

    • @Dsenpai-xm2zx
      @Dsenpai-xm2zx 3 роки тому

      @@ClaytonDarwin E-Commerce and Agriculture both based in Indonesia

    • @ClaytonDarwin
      @ClaytonDarwin  3 роки тому

      👍