This tutorial made me more excited to work on my thesis! Great work! Thank you!!!
Btw, for those looking for the exact code, I think this is it:
import cv2 as cv
import matplotlib.pyplot as plt
net = cv.dnn.readNetFromTensorflow("graph_opt.pb")  # load pretrained weights
inWidth = 368
inHeight = 368
thr = 0.2
BODY_PARTS = { "Nose": 0, "Neck": 1, "RShoulder": 2, "RElbow": 3, "RWrist": 4,
"LShoulder": 5, "LElbow": 6, "LWrist": 7, "RHip": 8, "RKnee": 9,
"RAnkle": 10, "LHip": 11, "LKnee": 12, "LAnkle": 13, "REye": 14,
"LEye": 15, "REar": 16, "LEar": 17, "Background": 18 }
POSE_PAIRS = [ ["Neck", "RShoulder"], ["Neck", "LShoulder"], ["RShoulder", "RElbow"],
["RElbow", "RWrist"], ["LShoulder", "LElbow"], ["LElbow", "LWrist"],
["Neck", "RHip"], ["RHip", "RKnee"], ["RKnee", "RAnkle"], ["Neck", "LHip"],
["LHip", "LKnee"], ["LKnee", "LAnkle"], ["Neck", "Nose"], ["Nose", "REye"],
["REye", "REar"], ["Nose", "LEye"], ["LEye", "LEar"] ]
img = cv.imread("pose.png")
plt.imshow(cv.cvtColor(img, cv.COLOR_BGR2RGB))
def pose_estimation(frame):
    frameWidth = frame.shape[1]
    frameHeight = frame.shape[0]
    net.setInput(cv.dnn.blobFromImage(frame, 1.0, (inWidth, inHeight), (127.5, 127.5, 127.5), swapRB=True, crop=False))
    out = net.forward()
    out = out[:, :19, :, :]  # keep only the 19 body-part heatmaps
    assert len(BODY_PARTS) == out.shape[1]
    points = []
    for i in range(len(BODY_PARTS)):
        # Slice the heatmap of the corresponding body part.
        heatMap = out[0, i, :, :]
        # Originally we would find all local maxima. To simplify the sample
        # we find only the global one, so only a single pose can be
        # detected this way.
        _, conf, _, point = cv.minMaxLoc(heatMap)
        x = (frameWidth * point[0]) / out.shape[3]
        y = (frameHeight * point[1]) / out.shape[2]
        # Add the point only if its confidence is above the threshold.
        points.append((int(x), int(y)) if conf > thr else None)
    for pair in POSE_PAIRS:
        partFrom, partTo = pair
        assert partFrom in BODY_PARTS
        assert partTo in BODY_PARTS
        idFrom = BODY_PARTS[partFrom]
        idTo = BODY_PARTS[partTo]
        if points[idFrom] and points[idTo]:
            cv.line(frame, points[idFrom], points[idTo], (0, 255, 0), 3)
            cv.ellipse(frame, points[idFrom], (3, 3), 0, 0, 360, (0, 0, 255), cv.FILLED)
            cv.ellipse(frame, points[idTo], (3, 3), 0, 0, 360, (0, 0, 255), cv.FILLED)
    t, _ = net.getPerfProfile()
    freq = cv.getTickFrequency() / 1000
    cv.putText(frame, '%.2fms' % (t / freq), (10, 20), cv.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 0))
    return frame
estimated_image = pose_estimation(img)
plt.imshow(cv.cvtColor(estimated_image, cv.COLOR_BGR2RGB))
# perform demo on video...
# perform this demo on webcam
cap = cv.VideoCapture(1)
cap.set(cv.CAP_PROP_FPS, 10)
cap.set(cv.CAP_PROP_FRAME_WIDTH, 800)
cap.set(cv.CAP_PROP_FRAME_HEIGHT, 800)
if not cap.isOpened():
    cap = cv.VideoCapture(0)
if not cap.isOpened():
    raise IOError("Cannot open webcam")
while cv.waitKey(1) < 0:
    hasFrame, frame = cap.read()
    if not hasFrame:
        cv.waitKey()
        break
    frameWidth = frame.shape[1]
    frameHeight = frame.shape[0]
    net.setInput(cv.dnn.blobFromImage(frame, 1.0, (inWidth, inHeight), (127.5, 127.5, 127.5), swapRB=True, crop=False))
    out = net.forward()
    out = out[:, :19, :, :]  # keep only the 19 body-part heatmaps
    assert len(BODY_PARTS) == out.shape[1]
    points = []
    for i in range(len(BODY_PARTS)):
        # Slice the heatmap of the corresponding body part.
        heatMap = out[0, i, :, :]
        # Originally we would find all local maxima. To simplify the sample
        # we find only the global one, so only a single pose can be
        # detected this way.
        _, conf, _, point = cv.minMaxLoc(heatMap)
        x = (frameWidth * point[0]) / out.shape[3]
        y = (frameHeight * point[1]) / out.shape[2]
        # Add the point only if its confidence is above the threshold.
        points.append((int(x), int(y)) if conf > thr else None)
    for pair in POSE_PAIRS:
        partFrom, partTo = pair
        assert partFrom in BODY_PARTS
        assert partTo in BODY_PARTS
        idFrom = BODY_PARTS[partFrom]
        idTo = BODY_PARTS[partTo]
        if points[idFrom] and points[idTo]:
            cv.line(frame, points[idFrom], points[idTo], (0, 255, 0), 3)
            cv.ellipse(frame, points[idFrom], (3, 3), 0, 0, 360, (0, 0, 255), cv.FILLED)
            cv.ellipse(frame, points[idTo], (3, 3), 0, 0, 360, (0, 0, 255), cv.FILLED)
    t, _ = net.getPerfProfile()
    freq = cv.getTickFrequency() / 1000
    cv.putText(frame, '%.2fms' % (t / freq), (10, 20), cv.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 0))
    cv.imshow('Pose estimation trial', frame)
cap.release()
cv.destroyAllWindows()
Did you finish your thesis?
What if I want to use this code on videos, not photos? If I have a video dataset in a folder and I want this code to work on those videos, how can I do this?
I was highly impressed by your video. I am also a PhD student at Universiti Sains Malaysia, working on human pose estimation. I hope we can share our views in the future.
Hi!
I need help with this. Are you available?
Very much impressed by the content of this video, the most detailed one so far.
It was very impressive.
How do I extract coordinate values for each joint?
That was a really informative video, but sir, can you please show us one with a trained model? It would be really helpful if you could come up with human pose detection with trained models. With love from an Indian fan :)
just sayn bros indian lmao
Hey, I'm trying to build an exercise recognition and rep counter system using pose estimation. How can I get started with this? Any kind of help will be appreciated! :)
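For the rep-counter idea, the counting step can be separated from the vision step. A minimal sketch of the counting logic, assuming you already have a per-frame elbow angle derived from the keypoints; the function name and the thresholds are illustrative, not from the video:

```python
def count_reps(angles, down_thr=70.0, up_thr=160.0):
    """Count reps from a sequence of per-frame elbow angles (degrees).

    A rep is one full down-then-up cycle: the angle drops below
    down_thr, then rises back above up_thr. Thresholds here are
    arbitrary illustrative values.
    """
    reps = 0
    phase = "up"  # assume the arm starts extended
    for a in angles:
        if phase == "up" and a < down_thr:
            phase = "down"   # arm flexed: rep in progress
        elif phase == "down" and a > up_thr:
            phase = "up"     # arm extended again: rep complete
            reps += 1
    return reps

# Two full curls and one partial rep that never extends again
angles = [170, 150, 90, 60, 100, 165, 170, 80, 50, 120, 170, 60]
print(count_reps(angles))  # → 2
```

The hysteresis between the two thresholds keeps jittery angle estimates near a single threshold from double-counting.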
Hello, do you have a video where we can tell if a person is doing an exercise the wrong way? I mean, if he is not using correct posture while exercising.
net = cv.dnn.readNetFromTensorflow("graph_opt.pb")
cv2.error: OpenCV(4.7.0) D:\a\opencv-python\opencv-python\opencv\modules\dnn\src\caffe\caffe_io.cpp:1138: error: (-2:Unspecified error) FAILED: fs.is_open(). Can't open "graph_opt.pb" in function 'cv::dnn::ReadProtoFromBinaryFile'
I get this error; how do I fix it?
Your videos are very helpful. Please upload more.
Thanks for this video. How do I do pose estimation for multiple people in a video? This is actually detecting the pose of only one person.
How do I train the pose estimation model for my custom use case?
Any step-by-step link would be helpful!
Thanks
Thank you for your interest, and sorry for the late reply. I have noted your suggestions and will definitely cover customized model building in the near future. Till then, please keep supporting me like this. Stay blessed.
Excellent Video!!!
You helped me bro thank you
Great tutorial for beginners, Thank You sir
Thanks, this helped a lot. I wish you had mentioned the output file, where we would get the coordinates of the body points. I would be thankful if you could help me out with that part too.
Sorry for the late reply. Please follow the video: when we perform the predictions and draw the points and join them, we already have the coordinates; that is how the points and lines are drawn on the image. All coordinates are relative to the image size, so you have to scale or check against the total image size accordingly. I hope I have answered your question; please print the x, y values we are already using in the video and follow the instructions. Stay blessed.
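Concretely, the scaling this reply describes is already in the tutorial code: each heatmap peak is rescaled from heatmap cells to image pixels. A standalone sketch of that arithmetic with dummy numbers; the helper name is mine, and the 46x46 heatmap size is what a 368x368 input typically produces with this model, though you should read it from out.shape:

```python
def heatmap_to_image_coords(peak, heatmap_size, frame_size):
    """Rescale a heatmap peak to pixel coordinates in the original image.

    peak         -- (x, y) as returned by cv.minMaxLoc on one heatmap
    heatmap_size -- (width, height) of the heatmap, i.e. (out.shape[3], out.shape[2])
    frame_size   -- (width, height) of the original image
    """
    hx, hy = peak
    hw, hh = heatmap_size
    fw, fh = frame_size
    return int(fw * hx / hw), int(fh * hy / hh)

# A peak at heatmap cell (23, 10) on a 46x46 map, for a 640x480 frame:
print(heatmap_to_image_coords((23, 10), (46, 46), (640, 480)))  # → (320, 104)
```

These are exactly the x and y the video's loop appends to the points list, so printing them gives you the joint coordinates.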
What if I want to use this code on videos, not pictures? What should I do? I have some videos in a folder, and I want to extract the body points and convert them to CSV. How should I do this?
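For the CSV question above, a sketch of the export step, assuming you already collected the points list that pose_estimation builds for each frame; the column layout and helper name are my own choices, only the part names come from the video's BODY_PARTS dictionary:

```python
import csv

# BODY_PARTS keys in index order, minus "Background", as used in the video
PART_NAMES = ["Nose", "Neck", "RShoulder", "RElbow", "RWrist",
              "LShoulder", "LElbow", "LWrist", "RHip", "RKnee",
              "RAnkle", "LHip", "LKnee", "LAnkle", "REye",
              "LEye", "REar", "LEar"]

def write_points_csv(path, frames_points):
    """Write one CSV row per frame: frame index, then x,y per body part.

    frames_points is a list of `points` lists as built in pose_estimation;
    None entries (low-confidence parts) become empty cells.
    """
    with open(path, "w", newline="") as f:
        w = csv.writer(f)
        header = ["frame"]
        for name in PART_NAMES:
            header += [name + "_x", name + "_y"]
        w.writerow(header)
        for i, pts in enumerate(frames_points):
            row = [i]
            for p in pts[:len(PART_NAMES)]:
                row += list(p) if p else ["", ""]
            w.writerow(row)
```

To process a folder of videos, you would loop over the files with cv.VideoCapture, append each frame's points list, and call this once per video.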
Hello, what should I add to the script so that I have hand recognition too?
Thanks for the video! One question though: why are all the points shifted a bit to the left and to the top, both on the image you used and in the quanhua92/human-pose-estimation-opencv GitHub repo that it takes as an example?
Hi, I'm trying to combine a YOLOv5 deep learning model with pose estimation. When I feed in a video, the computer takes much longer to process each frame due to the pose estimation. If I use a webcam instead, will it affect the overall performance?
Great video. How do I find the distance between one keypoint (hip) and another keypoint (shoulder)?
estimated_image = pose_estimation(img)
This line is not working; please guide me.
Really helpful video even for beginners, thank you so much! Though it is not very accurate when I tested it with different images.
Thank you for your interest; glad you liked it. The OpenPose project is under development, so keep following; it will be more robust in the future. Stay blessed.
@@deeplearning_by_phdscholar6925 thank you so much for replying. Subscribed!
I am wondering: do you think I could add additional dots on the body besides the hips, knees, etc.?
Can we also find the angle between each pair of points?
You helped me a lot, brother, thanks very much.
Can you tell me your specs for running this code? I ran it on an
i3-6006U processor, an NVIDIA 920MX GPU, and 8 GB of RAM; the computation was very slow and the result was not like yours.
Thanks for your interest. My PC specs are a Core i7 with a 2060 Ti GPU and 32 GB of RAM. You can also use Colab later, but I am not sure about the stability of OpenPose on Colab. Thanks for supporting my channel.
Thanks, your video is very useful. Does this use OpenPose?
Where can I get video datasets for push-ups, squats, lunges (at least 5 exercises)?
Hi, why am I getting this error?
AttributeError: module 'cv2' has no attribute 'readNetFromTensorFlow'
I'm using PyCharm...
How can I display whether one specific keypoint is up, for example the one on the arm? I want a program that says "left arm up", but I can't find anything about how to do that.
Thank you so much for your interest. There are several methods; one basic approach is to compute the angles of both arms using the points, and based on the angle values you can easily find which arm is up. A similar technique is used in yoga analysis apps, which estimate poses and warn you about your exercise posture.
OpenPose provides you all the points of the skeleton; all you need is to use computer vision techniques to find the angles (e.g. lines from a Hough transform), and you can easily find the angles. I may upload a video like this, but cannot commit at this moment.
Search for how to find the angles.
I hope I have answered your question; keep supporting our channel.
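If you already have the keypoints, the angle at a joint can be taken directly from three points without any Hough transform. A sketch under that assumption; the helper is mine, but the (x, y) tuples are exactly what the video's points list holds:

```python
import math

def joint_angle(a, b, c):
    """Angle at point b (degrees) formed by segments b->a and b->c.

    a, b, c are (x, y) pixel coordinates; e.g. pass shoulder, elbow,
    wrist to get the elbow angle.
    """
    ang = math.degrees(math.atan2(c[1] - b[1], c[0] - b[0])
                       - math.atan2(a[1] - b[1], a[0] - b[0]))
    ang = abs(ang)
    return 360 - ang if ang > 180 else ang

# Shoulder directly above the elbow, wrist out to the side: a right angle
print(joint_angle((0, 0), (0, 100), (100, 100)))  # → 90.0
```

A straight arm gives 180 degrees with this convention, so "arm raised vs. bent" becomes a simple threshold on the returned value.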
Quite informative.
Very good job, thank you. Is multiple-person detection possible?
Hello,
how do I detect the position in the video, like sitting or standing? Please help me solve this.
Hello, the video was awesome. Could you please guide us from scratch on what platform to use and what to install? Please!
Dear Sir, the tutorial is very impressive, but when I tried to run the code on my system, the line estimated_image = pose_estimation(img) worked, but the plot is not showing the lines and landmarks. What should I do? I checked the code several times.
Do I need to add or change any code? I tried changing the colors of the lines and points. As it is not working for the image itself, I cannot go further with videos and webcam. Please help me.
Thanks man, great job!
I have a question, hope you can answer: if I want to print the pose I am doing, like lifting my right arm, and have the program output "right arm up", how can I do that?
did you get the solution?
Yes, but not using this method
@@mathieulegentil5657 can you share which method you used? I'm a beginner and I'm learning to do a school project like yours. Thanks!
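For the "right arm up" question a few comments up: with the image's y axis pointing down, a crude heuristic is simply to compare the wrist's y to the shoulder's y. A sketch with hypothetical keypoints; the helpers and the demo values are mine, the part indices follow the video's BODY_PARTS dictionary:

```python
def arm_is_up(shoulder, wrist):
    """True if the wrist is above the shoulder in image coordinates.

    Image y grows downward, so 'above' means a smaller y value.
    Either point may be None if its confidence was below threshold.
    """
    if shoulder is None or wrist is None:
        return False
    return wrist[1] < shoulder[1]

def describe_arms(points, body_parts):
    """Build a label like 'right arm up' from the video's points list."""
    labels = []
    for side in ("R", "L"):
        sh = points[body_parts[side + "Shoulder"]]
        wr = points[body_parts[side + "Wrist"]]
        if arm_is_up(sh, wr):
            labels.append(("right" if side == "R" else "left") + " arm up")
    return ", ".join(labels) or "arms down"

# Hypothetical keypoints: right wrist raised, left wrist hanging down
BODY_PARTS_DEMO = {"RShoulder": 2, "RWrist": 4, "LShoulder": 5, "LWrist": 7}
pts = [None, None, (200, 300), None, (210, 150), (100, 300), None, (90, 400)]
print(describe_arms(pts, BODY_PARTS_DEMO))  # → right arm up
```

The angle-based method from the earlier reply is more robust; this y-comparison is just the simplest thing that produces a printable label.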
Hi bro, very nice. I just want to build an Android app to detect body motion the same way. Can you tell me what I need to do for it? Very thankful.
How do I estimate a 3D pose from 2D pose estimation?
Can we use such code for pose detection of animals?
Thanks for the informative video.
I'm getting an error on net = cv.dnn.readNetFlo... (right on the second line) and can't move on from there...
lol, I can't get past this error. What version of OpenCV did you use when this video was recorded?
OpenCV(4.5.3) C:\Users\runneradmin\AppData\Local\Temp\pip-req-build-czu11tvl\opencv\modules\dnn\src\caffe\caffe_io.cpp:1133: error: (-2:Unspecified error) FAILED: fs.is_open(). Can't open "graph_opt.pb" in function 'cv::dnn::ReadProtoFromBinaryFile'
File "M:\GITHUB\Test\OpenPoseTest.py", line 5, in
net = cv.dnn.readNetFromTensorflow("graph_opt.pb")
How do I apply the output animation to Unity or Blender, for example?
If we can't do that, then what is the point of it?
How can we find the x and y coordinates of a joint, and the angle between the hand, hip, etc.?
Very good work. How can I contact you? I really need your help with my final-year project on this topic. Please help me out. Reply please.
I'm getting errors on the first line of code. Am I missing something that I need to set up first?
Is there any way we can do custom keypoint object detection?
I'm into it from Dong-A University. Can I switch to your lab?
Thank you very much. It ran well. Do you know anywhere I can find information on one that can do action recognition?
I really appreciate your informative work. Could you address how to calculate the joint angles from this work?
Yes sir please
After running the code, the video is not starting or showing the pose estimation.
How do we get the heatmap? Can you explain it in detail with code?
Where can we see the output file of the video to download?
Can I use this topic for my master's thesis? It seems really interesting, but since it's at a beginner level, would it be okay if I work on this for my master's research?
For sure you can follow this topic, and many researchers are working on it. As I have already mentioned research topics and websites in the videos, you can follow them and extend your idea,
for example:
1) A yoga app:
you can design your app so that once all the poses are right, you let the user know that everything is fine;
if a user's pose does not meet the requirement, you can give him/her an alarm to correct the pose.
2) Exercise forms
3) Basketball poses
4) Any other action pose of any sport; you can analyse it and make your own app or project.
Keep supporting my channel.
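The alarm idea in 1) boils down to comparing the user's joint angles against a reference pose with some tolerance. A sketch of that comparison; the reference values, tolerance, and function name below are all illustrative, not from the video:

```python
def pose_feedback(user_angles, ref_angles, tol_deg=15.0):
    """Return the joints whose angle deviates more than tol_deg degrees.

    Both arguments map joint name -> angle in degrees; an empty result
    means the pose is within tolerance everywhere, i.e. no alarm.
    """
    wrong = []
    for joint, ref in ref_angles.items():
        user = user_angles.get(joint)
        if user is None or abs(user - ref) > tol_deg:
            wrong.append(joint)
    return wrong

ref = {"RElbow": 180.0, "RKnee": 90.0}     # made-up reference yoga pose
user = {"RElbow": 172.0, "RKnee": 140.0}   # knee far from the reference
print(pose_feedback(user, ref))  # → ['RKnee']
```

In an app, a non-empty result would trigger the alarm and name the joints to correct.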
@@deeplearning_by_phdscholar6925 Can you share the code for the yoga application? Thank you for the video.
Can you run this in a mobile app? Please make a video if possible.
Great video, it helped a lot, but I faced an issue while saving the output file.
It's not working with my webcam; it's just showing a previous image's estimation. Please help me.
How do I generate skeletons for multiple persons in an image?
Thank you ❤
Does this code work for multi-person detection?
Does it?
@@dhouaflighazi3680 not for me
Sir, I want your source code; the GitHub code makes the output slower. Please help me out.
Is it possible to recognize two persons?
Can I estimate 3D positions instead of 2D?
The short answer is no. The long answer is maybe, because OpenPose allows 3D pose estimation using several expensive cameras.
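A toy illustration of why the multi-camera setup mentioned here makes 3D recovery possible: for a calibrated, rectified stereo pair, depth follows directly from the horizontal disparity between the two views. The focal length and baseline below are invented numbers, and real OpenPose 3D reconstruction is considerably more involved:

```python
def stereo_depth(x_left, x_right, focal_px, baseline_m):
    """Depth (metres) of a point seen at column x_left in the left image
    and x_right in the right image of a rectified stereo pair.

    Uses the pinhole relation Z = f * B / d with disparity d = x_left - x_right.
    """
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("point must have positive disparity")
    return focal_px * baseline_m / disparity

# Same keypoint at column 340 (left) and 300 (right); f = 800 px, B = 0.1 m
print(stereo_depth(340, 300, 800.0, 0.1))  # → 2.0
```

With a single camera the disparity is unavailable, which is why monocular 2D keypoints alone do not determine depth.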
It raises an IOError: cannot open webcam. Sir, please help me rectify the error.
Can you tell us about the algorithm?
Can you please make one more video on converting this pose estimation to a .bvh file? Thanks in advance.
I'm getting an AssertionError after line 9.
Hello sir, I need the base paper for this.
Hey, I'm doing research for my project in which I want to do OpenPose recognition in basketball (pictures and/or videos). I would like to train the model with videos of basketball shots/layups, or pictures, to make the recognition even better, but I have no idea how to do that and combine it with what you've shown in the video.
I would like to talk about it through e-mail or some other contact if you are willing to help me.
Thank you for your interest. Although this video was intended for beginner level, in case you want to start pursuing research in this area, first please do a proper literature review by reading research papers to get an intuition for the topic (link already provided in the description).
Regarding your problem, you need to train a model for basketball shots; the best short-cut solution is to perform "Transfer Learning". On our channel we do have videos regarding transfer learning, e.g. face mask detection.
You need to modify the last few layers correspondingly, and you can train on basketball shot images in case you have a dataset; you can modify the TensorFlow model, etc.
@@deeplearning_by_phdscholar6925 thank you
I need help with a project making a fight detector.
Hi, please help with multi-person pose detection code.
where do i download cv?
How can I contact you? Sir, please reply.
Great tutorial 😇
Questions:
1) How do I save only the stick figure instead of subject + sticks?
2) How do I save only the stick-figure video?
Please help me
Just open anacond and write "sebject install stick diagram #subjects"
you used DNN!!?
Bro, please do fall detection, please!
Hello everyone, maybe someone knows how to get keypoint output in JSON, XML, or YML formats?
Can you please provide your code
Share your source code
Thanks for your interest. It has already been shared; check the video description.
@@deeplearning_by_phdscholar6925 Thank you sir
@@chjayakrishnajk hey, that GitHub code is not the same as shown in the video; will you guide me please?
@@deeplearning_by_phdscholar6925 Thanks! Sorry for the late reply
@@chjayakrishnajk I felt that the GitHub code is not the same as shown in the video; will you guide me please? Please share your method :)