If you watch the previous tutorials you will understand what is happening behind the syntax.
Briefly:
first step, movement is the difference between two frames
second, the difference contains noise from fine detail and lighting in the video, so Gaussian blurring eliminates the noise
third, obtaining a threshold image from the cleaned difference
fourth, dilating to eliminate distinct small, weak threshold fragments which corrupt healthy threshold detection
fifth, finding contours in the clean threshold image
sixth, eliminating small contours which cannot be a human by filtering on contour area
seventh, drawing a rectangle for each detected contour on the frame; the rectangle dimensions are obtained from cv2.boundingRect(contour)
That is it!
Cool, nice quick explanation, thanks!
Thanks for the simple explanation !
Great explanation
Can you send the link to the previous videos?
Please explain the 6th point. Where should I look? My code is still recognizing contours over the ropes.
PS: I am also getting this error:
error: OpenCV(4.2.0) C:\projects\opencv-python\opencv\modules\core\src\arithm.cpp:669: error: (-209:Sizes of input arguments do not match) The operation is neither 'array op array' (where arrays have the same size and the same number of channels), nor 'array op scalar', nor 'scalar op array' in function 'cv::arithm_op'
By evaluating the aspect ratio of each rectangle, we can determine whether it is a person or not.
if Height/Width > 1 then it may be a person
elif Height/Width
Thanks :D
You gave me hope, dude. You are the man! Thank you so much for helping us with the topics that we can't easily learn from anyone else.
Hello brother. Did you have a project? I wonder why you needed these tutorials. I have a graduation project right now, and I'm lost in these videos.
You can change the thresh value in the threshold function, which eradicates the noise generated by the rope moving in the background. I found that a value of thresh=50 worked fine for this input.
Top tip: replace 'vtest.avi' with '0' to use your native webcam!
Yes, but please without those quotes, so just write cap = cv2.VideoCapture(0)
I edited this line to add w > h, which makes sure no box with a width greater than its height will be detected (which helps keep the rope from being detected):
if cv2.contourArea(contour) < 900 or w > h:
You my friend are a legend
Super helpful guide. I was looking for an input to my KCF tracker that wasn't the traditional selectROI, so I used this and output the bounding boxes once they fit my parameters.
I loved the video before I even watched a second of it :)
Pls upload a tutorial on creating your own Haar cascade classifier
Thanks for making these videos so easy to follow.
Very good explanation. Congratulations. Do you have a video that counts these people after detection? Thanks for the video.
After several hours I finally found the correct link to download OpenCV.
Edit: never mind, it was still the wrong link; it's now been 3 hours. I'm just gonna waste my money on an actual security camera, even though it's not necessary,
because I could use code.
Thanks for that very helpful Video.
Which camera and components have you used for implementing this in hardware?
That was amazing! Thank you so much
thanks dude appreciate it
Awesome tutorial sir, although a Haar cascade would be better
Thank you, God bless you. You are the best teacher. You helped me a lot.
Great Video!!
Is there a way to use the concept in the video to count the number of people in the test-video??
Can you provide the video which you have used in your code? Also, can you please explain why you have used frame1
Thank you !
Thank you so much bro your really helped me, you made my day
Hi! How do I download/get the video mentioned in this video?
@@justanothergoogleuser
I used my own video
Great video! Thank you for sharing!
I want to change my rectangle frame as I'm using a live webcam, and the previous code has (frame,(x,y),(x+w,y+h),(0,225,225),2)
What values should I pick for my live webcam so that the full body can be detected?
Really helpful,Thanks
It works, bro 👍🏻, thanks 🙏🙏
(auto subscribe)
Interesting. How could we differentiate between actual movement and camera movement?
It worked, thanks.
Can you use this to create data points (X,Y) for that movement every second for example?
Which database would you prefer to store the data about a person, trace and time they were present in this video?
I translated it all to C++ and it compiles, but unfortunately no contours are drawn.
edit: after debugging I realized that contours are shown only on the first frame
edit2: okay, so after my investigation I found out that in C++ the "=" operator for the Mat class shares data instead of copying it, so frame1 = frame2 wasn't working properly. I had to replace it with frame1 = frame2.clone()
How do you select the "thresh" argument for cv2.threshold()? In this video it is set to 20. Are there any criteria for arriving at a value?
If you guys want real time with a webcam, use the following code:
cap = cv2.VideoCapture(0)
cap.set(3, 640)
cap.set(4, 480)
If you want to use an external camera, I think you need to change the 0 to 1 or another number; I would google it.
I would also google cap.set so you can make sure it works for you.
That's cool, but I don't know how to add a toolbar to control the video progress, can you teach us how to do it? thanks
Good tutorial
Can I ask if this is real-time, or does it have some delay? If there's a delay, how far behind real time is it? Thank you
If you still want to know: very minimal delay, if any; I don't notice any with my code on my webcam.
Thank you for the tutorial. I am curious: is it possible to use a streamed video as the source? If yes, how can that be achieved?
Thanks for this great tutuorial!
Hey ! Where can I get a similar video?
Hello. Can it be applied when we are watching any video online?
Hi. Thanks so much for this tutorial. It helps me a lot. Could you please show us how to take a screenshot of the frame when capture motion?
Did u resolve this question?
Hi...
Thank you for the tutorial. It's impressive. I have one question.
Can we make the animation of moving objects a function of time? The program will run and a continuous animation will run?
Thank you
File "C:/Users/tasme/PycharmProjects/pythonProject1/main.py", line 15, in
print(frame1.shape)
AttributeError: 'NoneType' object has no attribute 'shape'
Process finished with exit code 1
I'm having this problem, will you please help?
Thank you so much for this awesome tutorial!!
@Karaoke By K Thank you!
Is it possible to do this with real-time video streaming? For example, from my security cam?
can u tell me which algorithm ur using
Can you please let me know what the unit is for the area of the rectangle you're using, for example 700... Is it 700 mm^2? How can I find out?
How can I get the video file that was given as input?
The approach is excellent, but the algorithm won't work if the people are not moving, because when they stop moving there will be no difference between two consecutive frames, hence it won't detect the person. CAN U FIX THIS PROBLEM?
Thank you, how about consecutive images?
Thanks for this great tutuorial.
This is very inspiring content. Thank you for sharing.
Hi, I'm developing a Python speed-tracking project and your video taught me a lot... Could I copy a part of your script to implement in my project? That would be cool...
Thank you for this video.
Question: why are you showing frame1 and not frame2? frame2 should be closer to the real-time picture.
Hi,
Drawing contours (or rectangles) on a frame is nothing but changing the frame itself. If we draw contours on frame2, then in the next iteration frame2 becomes frame1, and that is no longer the original frame of the video; it's a modified frame with contours drawn on it.
thanks bro........
What if there are several moving objects other than just persons and ropes, let's say cars, birds, etc.? How would you track only persons? It would have been awesome if you had explained YOLO, and multithreading for faster and more accurate video processing.
Have you found the solution????
@@Satchi017 Nah, after doing this project, I was busy with other tasks.
Can we raise an alarm through code if any person comes near the lamppost? Can you guide me on that?
How can I measure the accuracy of this code? Please help me with this.
Why are we assigning frame2 to frame1 after drawing contours, and then reading a new frame into frame2?
Can I ask what the difference is between motion detection and object detection? Is there a difference?
Super
Hi! Where can I get the sample video shown in this video ?
Can I use the VLC window panel as a motion-capture source? Because I want to be able to change the video source.
Hi! How do I download/get the video mentioned in this video?
great
Sir, at the very start of the mp4, why didn't OpenCV detect the person who was walking along the left side of the window on the narrow footpath?
The person is far away from the camera, and because of this the number of pixels that person takes up is below the area threshold set in the code. As that person comes into the foreground, they occupy more pixels and start getting detected.
@@erikanderson6076 yes I realized it later thanks😊
Is it possible to do it in the game?
thx
Good work, but could you post the code and the test movie?
Really appreciate your vids but please use dark mode. You'll thank me later
Hi, would this work to track multiple markers on a segmented object?
Congratulations on the great job.
I would like to know how to put an ID on these people?
You should look more into face detection and recognition using the Haar cascade algorithm. This script is for simple motion detection.
Umm, I guess you could declare an int variable and putText that number at the coordinates of each rectangle; then, when the loop is over, you could putText at the top of the screen: "j moving objects detected", and store that j somewhere (I don't know much about that). But nice idea, that way we can find the number of objects.
How can I choose the area limit for small objects (here it was 700 or 900)?
Thanks, great tutorial!!! I'm trying to capture a window or part of the screen; is it possible to feed that through cv2.VideoCapture()?
Vid starts at 2:34
Nice nice
how do you use your webcam
can this comment get a heart ?👀
@ProgrammingKnowledge Thank you very much sir !
I ran your code with a new video. In that video, along with the people, the camera is also moving; it is placed just behind a procession. In that case, I am getting multiple rectangles for the same moving object instead of one. Is there a way to solve this issue?
DO YOU HAVE MORE RECORDINGS TO TEST? CAN YOU MAIL THEM TO ME AT pratiksha.manave99@gmail.com?
After line 15, I get the following error: "ValueError: too many values to unpack (expected 2)".
The same as in the previous video where this technique is also used.
Can anyone help me / point me in the right direction (yes, I already used Google)?
Can I run it on Colab?
When taking the difference between two frames, shouldn't it be 0 because the same video input is used (captured)?
I believe it is taking two frames directly one after the other.
How do I enable auto-suggest for OpenCV in VS Code?
I copied the code shown at the start into a Google Colab notebook and the video isn't showing up; does anyone know why?
How can I crop the rectangle regions out of the video frames?
Helloji radhe radhe
I am getting this error:
Traceback (most recent call last):
File "c:\Users\orange\Bureau\Learnings\openCv\basic_motion_detection.py", line 19, in
diff = cv.absdiff(frame1, frame2)
cv2.error: OpenCV(4.7.0) D:\a\opencv-python\opencv-python\opencv\modules\core\src\arithm.cpp:650: error: (-209:Sizes of input arguments do not match) The operation is neither 'array op array' (where arrays have the same size and the same number of channels), nor 'array op scalar', nor 'scalar op array' in function 'cv::arithm_op'
Can I get the video link? Could anyone help me with the pedestrians walking video?
how to add a line that comes from the back?
Thanks a million!!!
How do I add multiple videos, bro?
Do I need to use an .avi, or can this work with .mp4? I tried it but it doesn't work. Why is this?
How can I find the video that you used?
Hi, great video!
I have a question: how can I know how many people are in the input video?
you can extract the length of contours or so, if I am not mistaken
contours is a list, so it has a length: len(contours)
or increment a count inside the for contour in contours loop
Hi, as we are working on frames, we can find the number of people as the number of contours in each frame by adjusting the contour area size (one contour per person).
Is there a way to do motion tracking with the boxes around the people with live video?
Do you have an idea? If you do, please drop a hint.
use cv2.VideoCapture(0) for a live webcam
Hi
It's a wonderful video. Can you make the camera record when motion starts? I mean record every motion: when the camera detects someone moving, it starts recording the video. It would be great if you taught us how to do that.
Not sure if this is helpful. But "Blue Iris" software is capable of doing what you are asking. I have a pc running 6 cameras with it and works very well
What's the use of taking the difference between 2 frames? Why is it needed?
Please reply..
By taking the difference between the two frames, all intensity for parts of the video where things don't move is reduced to zero (since their values are the same in both frames). The parts of the video where things DO move, however, will have different values in the two frames, so the difference results in a non-zero value. The resulting image, diff, will be low-intensity (0) wherever the two frames are the same (no movement), and it will have some intensity for those parts that are different (movement). In this way, diff preserves only the movement in the original video.
@@blz1rorudon does this mean that if there is any vibration of the webcam/camera, the data can't be used reliably to detect motion?
How do we decide the threshold value? Isn't it better to use Otsu's binarization to calculate the threshold value?
This error is occurring, what should I do now?
diff = cv2.absdiff(frame1, frame2)
TypeError: Expected Ptr for argument '%s'
I also got an error on this line, that the shapes of the 2 frames are not the same. If you come up with any solution please inform me.
@@nikhilrana8800 if you get the same error then you must have changed something that doesn't follow the tutorial; I made the same mistake
@@sulmanyousaf6545 What did you change in your code?
@@nikhilrana8800 before the loop ends we are getting frame2 = cap.read(); I just placed ret, before frame2, as:
ret, frame2 = cap.read()
@@sulmanyousaf6545 thanks for the help
Any ideas on how to download a nice video to run this code on? I'm a noob to CV.
I assume you have already figured it out, but if you're just testing you don't really need to download a video. You can simply go to, e.g., shutterstock/video and get the URL of a video (using F12 or whatever button gets you into the developer tab in your browser), then just write that in cv2.VideoCapture("the url")
Can someone explain to me the use of lines 17 and 18?
frame1 = frame2
ret, frame2 = cap.read()
Like, what is their use exactly?
I also want it, brother
Once frame2 is assigned to frame1, both frame1 and frame2 hold the same value, which would make the 'diff' variable 0.
Hence frame2 is set to a new value using cap.read()
@@vigneshkathirkamar3113 hi Vignesh, what is the importance of the "ret," part?
Why do we have None in the dilate function?