Join My AI Career Program
👉 www.nicolai-nielsen.com/aicareer
Enroll in My School and Technical Courses
👉 www.nicos-school.com
Some great explanations of complex academic machine vision concepts. Structured lighting would be a very interesting topic (stripe lines for distance estimation).
Thank you very much! Yes, structured light is very interesting and used in many practical applications. I might make some videos about the theory, but I might not be able to do practical demonstrations
I have already captured RGB images, a 3D model (.ply) & the camera parameters. Do you know of a tool to get the R & T matrices for each image?
Exactly what I was looking for. Thank you!
Thanks for watching! Glad I could help
I saw the camera calibration video, from which you get the camera matrix and the distortion coefficients. My question: where do we use those distortion values and the other values?
We use the distortion parameters to undistort the image. In this video we calibrate the camera so we get better accuracy for the estimation, and then we use the camera matrix to get the relation between the camera and the world with the solvePnP method
@@NicolaiAI Yeah, thank you! Your videos are good, man
@@sarathkumar-gq8be thank you very much! I appreciate that
exactly what I was looking for! thank you for the great explanation :)
Thanks a lot for watching! Glad it was helpful
Hey, I wish to determine the pose of an object in a given photo. I have a bounding box around that object (with the help of an object detection technique), and I am taking 5 points as control points: the four corner vertices and the centre of the bounding box. I am confused as to what I should take as the world coordinate frame. Can you please help me out?
Hello, great tutorial! Could you please tell me how I get the .npz file (cameraparams.npz)?
Hi, thank you very much! That's a file where the camera parameters and distortion parameters are stored after camera calibration. I have videos going over camera calibration as well
How do I do the opposite of this, finding my camera's pose from an image (or multiple images)? Since solvePnP finds the rotation and translation from the world system to the camera system, do I just transpose the rotation and invert the translation?
Please do the C++ version
Sure I'll look into that!
Why do the axis and draw functions throw a type error in the 0-indexed part?
Hi Nicolai. Thanks for these videos.
I have a question. I was testing the pose estimation code with the boxes option and it had no problems, but when I use the draw function and the axis variable it shows an error in the cv.line method inside the draw function, something like: "cv2.error: OpenCV(4.5.5) :-1: error: (-5:Bad argument) in function 'line'"
Do you know if there is any problem in the parameter declarations or something else? Thanks beforehand
Thanks a lot for watching! Make sure that you pass the correct data structures to the line function from OpenCV
Hi sir, I have met the same problem and would like to ask: have you solved it, and with which method?
I haven't fully understood; would it be fair to say that the algorithm that finds the most probable 'fit', i.e. the pose (composed of rvec and the translation), uses some sort of iterative method to reach its conclusion?
Yeah, it's based on least-squares approximation, or you can set the problem up as a nonlinear one, which would be iterative.
Thank you! Very interesting
Thanks for watching!
Thank you for the great explanation. I have a small problem that I can't find an explanation for anywhere, and I would really appreciate it if you put me on the right track.
How can I find the location of a 3D point (in mm) in the world frame, whose origin is the checkerboard origin, if I have the 3D location of the point in mm in the camera frame and the tvec and rvec of that particular view of the checkerboard?
I just want to know if I can find a way to use the tvec and the rvec to transform a point from the camera frame to the world frame.
And if I can't, what should I do in my case to obtain the 3D coordinates of a point in the world frame in mm?
I'm using a Kinect v2, which offers a depth sensor, if that can help in any way.
Thank you very much. You should be able to do it when you are using a Kinect. You should be able to project the 2D image coordinates out to the 3D world with the camera matrix and the t and r vecs. In one of my videos I show the pinhole model and how 3D and 2D points are related to each other
Nice content.
I would be grateful if you could provide the sources to support our further development. 👍
Thanks for the videos! Does this work if I'm using a moving camera? (A cellphone that will track something that's moving)
Thanks for watching! Yeah, you can do pose estimation with a moving camera as well, but you will just need a way to get the 3D points for that object and match them to the 2D image points
@@NicolaiAI Thanks! Would something like cv.triangulatePoints work? The parameters are: projMatr1, projMatr2, projPoints1, projPoints2, points4D=None. I'm not sure, as I can extract two sets of points from different images, but the camera matrix would be just one...
@@valeriaospital1410 yes
This video fit me like a glove, thanks mate!
Thanks for watching! Glad that it helped you
Is the checkerboard square size 10 mm / 1 cm?
I want to find the measurements of any object if I place it on the chessboard. How can I do that? I want the height, width and depth of the object
Hi, I went through this tutorial and used the calibration code (with the chessboard images) to get the dist_co and camera_matrix. However, for my head pose estimation project this seemed to give me inaccurate results compared to using standard camera_matrix and dist_co parameters for cv's solvePnP method; do you know why this may be?
How do you know the standard camera matrix and distortion parameters if you haven't calibrated the camera? It sounds a bit weird that the results are more inaccurate after doing calibration
@@NicolaiAI My comments keep getting deleted, but I used standard parameters seen in other tutorials; would the issue be that I have used standard images and not any from my own webcam?
@@irishrepublican756 Yes, you would need to do the calibration on images from your own camera, since every camera is different, even cameras of the same model sometimes. You can't use others' parameters for yours :)
@@NicolaiAI Okay great, I'll get a chessboard and take pictures of it to use instead then, thanks :)
@@irishrepublican756 If you have a tablet, you can display a chessboard on it and calibrate with that, or just printing out a chessboard could work too
Thank you, sir, for the video. Please can you do one video to explain motion estimation?
I already have that on my channel with visual odometry
When I load the calibration parameters in, I get an error: "Cannot load file containing pickled data when allow_pickle=False".
What can I do?
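A likely cause is that newer NumPy versions refuse to unpickle object arrays by default for security reasons. A small self-contained demonstration of the opt-in fix (the file name and contents here are made up):

```python
import numpy as np

# Re-create a parameters file containing an object array, which is
# what triggers the "pickled data" error on a plain np.load(...):
params = np.array({"k1": 0.1}, dtype=object)   # object dtype forces pickling
np.savez("params_demo.npz", params=params)

# Opt in to unpickling explicitly (only do this for files you trust):
data = np.load("params_demo.npz", allow_pickle=True)
loaded = data["params"].item()
```

If your .npz only holds plain numeric arrays (as the calibration results should), re-saving it with ordinary arrays avoids the flag entirely.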
Is it possible to get the real coordinates of an object by using a camera, if we know the real coordinates of the camera and the height of the camera?
Yes, if you know the camera parameters and you have calibrated the cameras, then you can do 2D-to-3D world reprojections of the image points
@@NicolaiAI To do this, I calibrated my camera and got the camera intrinsic parameters. I need to find the extrinsic parameters, but I couldn't find anything about that. Do you have any video about it?
@@sametnalbant9316 Yes, I have videos here on my channel about camera calibration where I talk more about extrinsic parameters. If you want to know more details, you would have to go through some of the first videos in my computer vision playlist. In those I go into the theory before the practical examples
@@NicolaiAI Thanks for the videos! Is there anywhere I can find how to do the 2D-to-3D reprojections?
@@valeriaospital1410 thanks a lot! I'll make videos about triangulation for stereo vision
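A minimal sketch of such a 2D-to-3D reprojection, assuming the extrinsics are known and the object lies on the ground plane Z = 0 (all numbers made up; the snippet projects a known ground point to a pixel, then recovers it from the pixel alone):

```python
import numpy as np

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                            # camera axes aligned with world axes
tvec = np.array([[0.0], [0.0], [5.0]])   # extrinsics: P_cam = R @ P_world + tvec

# Forward: project a known point on the ground plane (Z = 0) to a pixel.
P = np.array([[1.0], [2.0], [0.0]])
p_cam = R @ P + tvec
pix = K @ (p_cam / p_cam[2])

# Back-project: pixel -> viewing ray, then intersect with the plane Z = 0.
ray_cam = np.linalg.inv(K) @ pix   # ray direction in the camera frame
C = -R.T @ tvec                    # camera centre in world coordinates
d = R.T @ ray_cam                  # ray direction in world coordinates
s = -C[2] / d[2]                   # scale where the ray hits Z = 0
P_est = C + s * d                  # recovered world point
```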
Thanks for this video! How do I estimate the height and width of a real-world object with the obtained intrinsic, extrinsic, and distortion parameters?
You've made great tutorials! Can you explain how we should choose the resolution of a camera to detect small objects at long distances (3 m to 7 m)? Thanks
There isn't really a correct way to find the best resolution. You might need a pretty high-resolution camera to be able to see that far, and also if you want high precision
Hey, nice video and great explanation! I would like to ask if you know a working implementation of the LINE2D algorithm (or similar) for detection and pose estimation of a texture-less object, using only a 2D grey image and template matching with a 3D model (no deep learning or similar). I am freaking out trying to find something that works, as I am new to computer vision :)
Thank you, and thanks for watching! I unfortunately don't have an implementation of the LINE2D algorithm and haven't really worked with anything similar before
Thanks for sharing! But I am kinda confused: what is the fundamental matrix mentioned in the documentation?
You can use either the essential or the fundamental matrix to describe the relation between two images of the same points
@@NicolaiAI So I guess it is basically the transformation matrix in 2D? It doesn't actually make sense to me 👀
You can check this out; it's a good explanation: cs.stackexchange.com/questions/41327/what-is-the-difference-between-the-fundamental-matrix-and-the-essential-matrix
Teacher, first of all, congratulations on the channel. I need your help: I have a simple image with a range of 5 placements where the quantity is identified. How do I identify the biggest red-colored region and report where its X,Y position is, just to create an alert? I'd appreciate it if you can help me.
Great video! I'm trying to use a camera and a reference object to estimate the position of the camera. The reference object is a square of known x * x size. I also know the X,Y,Z distance of that square from my reference world coordinates (0,0,0). I know the extrinsic and intrinsic parameters and distortion coefficients of the camera, and I can find the X,Y coordinates of each of the square's 4 corners in the picture. Is it possible to find the X,Y of the camera relative to the position of that square using this method?
Thanks in advance
Have you solved it?
Sorry, how do I define the distortion parameters?
The distortion parameters are found through camera calibration
Great video! I'm working on a project where I need to estimate the height of a camera, so essentially just returning the Z coordinate from tvec. But I'm not sure how to specify the objectPoints parameter in the cv2.solvePnP function. Can you please explain how to determine the object points? I understand it is the real-world coordinates of the actual object, but I'm still unsure how to find them. Thanks!
In the case of camera calibration, for example, we get the object points from the corners of the chessboard. You could use arbitrary objects, but then you will need to find the points in the image yourself and relate them to the image frame. We use a chessboard since the sizes of the squares are the same, so it's pretty simple
@@NicolaiAI Thanks for your timely response!
In my case I'm using a QR code. I am able to locate the corners of the QR code in the image, and I pass these coordinates into cv2.solvePnP as imagePoints. So to determine the object points, do I just need to pass in the same coordinates but in 3D space (i.e. z=0)?
Oops, forgot to mention: I calibrated the camera using a chessboard and was able to retrieve the intrinsic parameters, distortions, etc.
You could probably do that. What is your goal with doing PnP?
@@NicolaiAI Estimating camera position, essentially I just need to estimate the height at which the camera is placed based on the reference image
Whenever he says "object", I hear "optic", which can be somewhat confusing :P
In real-life situations we use the letter H to indicate a UAV landing destination. But why are we using a chessboard pattern in this code?
You could use whatever object you want; in this video I just use the chessboard since I have the object points
Hello again, please tell me how to fix this:
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/numpy/lib/npyio.py", line 259, in __getitem__
raise KeyError("%s is not a file in the archive" % key)
KeyError: 'CameraParams.npz is not a file in the archive'
You will need to do the camera calibration as I do in the video and then have the file in the same directory as this script
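For reference, a minimal sketch of how such a parameters file is written and read back. The KeyError above means the archive has no array stored under the key that was indexed; data.files lists the keys that are actually there:

```python
import numpy as np

# Saving (done once, after calibration). The keyword names become the keys.
K = np.eye(3)
dist = np.zeros(5)
np.savez("CameraParams.npz", cameraMatrix=K, dist=dist)

# Loading: index the archive with those same keys, not with the file name.
with np.load("CameraParams.npz") as data:
    # data.files -> ['cameraMatrix', 'dist']
    K_loaded = data["cameraMatrix"]
    dist_loaded = data["dist"]
```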
Can I email you, and could you help me then if possible?