The content you present is so brilliant!
Thanks for the structured session on image processing 🤘
Thanks!
Thank you
really loved your explanation, thanks.
At 19:30, where you were explaining the FAST detector algorithm: I think the function 'FastFeatureDetector_create(50)' is not taking 50 points, because the number of points in the generated image is quite low. To back this up, when you change the value from 50 to 20, there are far more corner points in the new image than in the previous case. This means that 50 is not about 50 points.
Let me know if I am wrong, and what create is truly doing there.
50 is the threshold value, not the number of points.
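A minimal sketch illustrating the threshold behaviour described above (the image path is a placeholder; a lower threshold passes more corners):

import cv2

# Load an image and convert it to grayscale (placeholder path)
img = cv2.imread("images/grains.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# The argument to FastFeatureDetector_create is the intensity threshold,
# not a keypoint count; lowering it accepts more corners.
for threshold in (50, 20):
    fast = cv2.FastFeatureDetector_create(threshold)
    keypoints = fast.detect(gray, None)
    print(f"threshold={threshold}: {len(keypoints)} keypoints")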
I need to know more about the difference between the key points and descriptors 😁
And I needed to use SIFT.
Very nice tutorial! It helps me a lot! Thank you.
Very good tutorial, thank you!
This was so helpful. Thank you very much.
How would you go about figuring out whether star images have a moving object in them?
From the future here. Have you done a video on SIFT or SURF since then?
Do you have the LESH descriptor?
it's very insightful. Thanks for your effort
Can we use custom matching points? For example, can we manually add the features we need in order to do template matching?
Excellent video! Thank you.
Glad it was helpful!
How can I get the orientation, scale, and position information of keypoints?
Sreeni Garu, thank you so much for the wonderful session. May I know how to contact you? 🙏🙏🙏
Amazing explanation Sir 👌👌👌
Thanks and welcome
Excellent! Could you please do a video on how to use these methods for classification/recognition, like in video 176?
Thank you. Is there any tutorial on the GIST descriptor?
Frankly, I never heard of it.
How can I use the BRISK detector to detect points in Python, please?
Can you do a video about fragile telomeres?
Amazing tutorial.. thanks a lot
Hello there, this is very helpful. My code is not working with cv2.imshow; Colab reports that cv2.imshow cannot be used. When I use display, the image does not show in a window.
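If this is running in Colab, one common workaround (an assumption about the setup, since cv2.imshow is disabled there) is the cv2_imshow patch that Colab itself provides:

import cv2
from google.colab.patches import cv2_imshow  # Colab-provided replacement for cv2.imshow

img = cv2.imread("images/grains.jpg")  # placeholder path
cv2_imshow(img)  # renders the image inline in the notebook output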
great lectures
Thank you for your explanation. How do you combine CNN features with HOG/SIFT?
Great tutorial! thank you very much
Glad you enjoyed it!
Hello, do you know how to find the keypoints in a video's frames?
Sorry, I never tried it on videos, but I don't see why it would be any different. You just load a video as a (time) series of images, so I believe this procedure should work.
@@DigitalSreeni Yeah, I tried it, but it is not working as it does on an individual image.
If I find something, I will come back with information.
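A minimal sketch of the approach described in the reply above, reading a video frame by frame and running the detector on each frame (the video path and the choice of ORB are assumptions):

import cv2

cap = cv2.VideoCapture("video.mp4")  # placeholder path
orb = cv2.ORB_create()

while True:
    ret, frame = cap.read()
    if not ret:
        break  # end of video
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    kp, des = orb.detectAndCompute(gray, None)
    frame_kp = cv2.drawKeypoints(frame, kp, None, flags=0)
    cv2.imshow("Keypoints", frame_kp)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()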
How can I match two images using those descriptors? Suppose I want a matching score between two images on a scale of 0-1. Can I use those descriptors to do that?
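One possible sketch using OpenCV's brute-force matcher (the normalization to a 0-1 score is an illustrative assumption, not a standard metric):

import cv2

img1 = cv2.imread("image1.jpg", 0)  # placeholder paths
img2 = cv2.imread("image2.jpg", 0)

orb = cv2.ORB_create()
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Brute-force matcher with Hamming distance (suited to ORB's binary descriptors)
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = bf.match(des1, des2)

# Crude 0-1 score: fraction of keypoints that found a cross-checked match
score = len(matches) / max(len(kp1), len(kp2), 1)
print(score)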
cv2.FastFeatureDetector_create(50) --> parameter '50' is for threshold, not for number of key points. Otherwise, brilliant series!!!!!
Thanks for correcting me, really appreciate it.
Hi, is it related to limiting the number of features so that every image can have the same number of features, so that we can use SVM or other traditional ML algorithms on it?
I think the patent for SIFT expired in 2020, and its implementation is available in the OpenCV main branch.
Yes, you are right. I checked it on OpenCV 4.6.0 and SIFT is available. Thanks for the note.
Sir, why didn't you use this method to convert the image to grayscale, "img = cv2.imread("images/grains.jpg", 0)", like you did in the last tutorial, rather than using "gray = cv2.cvtColor()"?
Is there any difference between the two methods? And if there is, what is it?
No difference between both approaches. No specific reason why I chose one over the other method. Normally I like to import images as color (RGB) and then convert them to grey scale. That way my color image is available in case I need it for some operations. Also, I sometimes apply an image processing function to an RGB image by converting it to HSV space or applying it to each of the R, G, B channels. In summary, import images as greyscale if you know you do not need all channels.
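For reference, a minimal sketch of the two equivalent approaches mentioned above (the file path is a placeholder):

import cv2

# Approach 1: read the image directly as grayscale
gray1 = cv2.imread("images/grains.jpg", 0)  # 0 = cv2.IMREAD_GRAYSCALE

# Approach 2: read as color and convert afterwards,
# keeping the color image around in case it is needed later
img = cv2.imread("images/grains.jpg")
gray2 = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)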
Dear Sreeni Garu,
First of all thank you for your time and very professional explanations and tutorials.
I am struggling to save kp1 and des1 after orb.detectAndCompute; I am going to use them in another project.
(kp1, des1 = orb.detectAndCompute(img1, None))
I have seen your next tutorial about CSV save, but I cannot do it for these kp and des.
Please help when you can, I would appreciate it.
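Since cv2.KeyPoint objects do not serialize as easily as arrays, one possible sketch (an assumption about the approach, not something from the tutorial) is to save the descriptor array with NumPy and store the keypoint attributes separately:

import pickle
import numpy as np
import cv2

img1 = cv2.imread("image1.jpg", 0)  # placeholder path
orb = cv2.ORB_create()
kp1, des1 = orb.detectAndCompute(img1, None)

# Descriptors are a plain NumPy array, so they can be saved directly
np.save("des1.npy", des1)

# Store keypoint attributes, then rebuild cv2.KeyPoint objects later
kp_data = [(k.pt, k.size, k.angle, k.response, k.octave, k.class_id) for k in kp1]
with open("kp1.pkl", "wb") as f:
    pickle.dump(kp_data, f)

# In the other project: load and reconstruct
des1_loaded = np.load("des1.npy")
with open("kp1.pkl", "rb") as f:
    kp_data = pickle.load(f)
kp1_loaded = [cv2.KeyPoint(pt[0], pt[1], size, angle, response, octave, class_id)
              for (pt, size, angle, response, octave, class_id) in kp_data]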
Can somebody simplify "descriptor" and "key point" for me, please? Still struggling to understand the terms
A keypoint is the feature or point of interest. A descriptor is a way to uniquely describe that keypoint; in the case of SIFT, it is a vector built from the orientation histograms of neighboring points.
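A minimal sketch inspecting both objects (the image path is a placeholder; ORB is used here, but SIFT via cv2.SIFT_create() works the same way):

import cv2

img = cv2.imread("images/grains.jpg", 0)  # placeholder path
orb = cv2.ORB_create()
kp, des = orb.detectAndCompute(img, None)

# Each keypoint stores where and how the feature was found
k = kp[0]
print(k.pt, k.size, k.angle)   # position, scale, orientation

# Each descriptor row is the vector that describes one keypoint
print(des.shape)               # (number_of_keypoints, descriptor_length)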
error: (-5:Bad argument) in function 'drawKeypoints'
> Overload resolution failed:
> - Can't parse 'keypoints'. Sequence item with index 0 has a wrong type
> - Can't parse 'keypoints'. Sequence item with index 0 has a wrong type
Sir, I am having an error when using drawKeypoints.
My code:
orb = cv2.ORB_create()
kp1 = orb.compute(img1, None)
kp2 = orb.compute(img2, None)
imgKp1 = cv2.drawKeypoints(img1, kp1, None, flags=0)
imgKp2 = cv2.drawKeypoints(img2, kp2, None, flags=0)
Not sure of the error. One suggestion is to convert your images and keypoints to integer and see if that helps.
@@DigitalSreeni OK thanks, I shall do that.
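For reference, a likely cause in the code above (an observation, not the author's suggestion): orb.compute() returns a (keypoints, descriptors) tuple, so kp1 ends up holding the whole tuple instead of a keypoint list, which is what drawKeypoints fails to parse. A hedged sketch of a fix:

import cv2

img1 = cv2.imread("image1.jpg", 0)  # placeholder paths
img2 = cv2.imread("image2.jpg", 0)

orb = cv2.ORB_create()

# detectAndCompute returns the keypoint list and the descriptor array separately
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# drawKeypoints expects the keypoint list, not a (keypoints, descriptors) tuple
imgKp1 = cv2.drawKeypoints(img1, kp1, None, flags=0)
imgKp2 = cv2.drawKeypoints(img2, kp2, None, flags=0)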
Great video!
thank you
Amazing! Thanks
This is great content. I'm just impatient to go faster. But I can't!!!
The patent of SIFT expired in the year 2020. It is available through the cv2.SIFT_create() function.
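A minimal sketch of using it (the image path is a placeholder):

import cv2

img = cv2.imread("images/grains.jpg", 0)  # placeholder path
sift = cv2.SIFT_create()
kp, des = sift.detectAndCompute(img, None)

# Draw keypoints with their size and orientation
img_kp = cv2.drawKeypoints(img, kp, None, flags=cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
cv2.imshow("SIFT keypoints", img_kp)
cv2.waitKey(0)
cv2.destroyAllWindows()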
thank you
You're welcome
Thank you