First of all, a great experience learning from you. My question is: why didn't you use the 'z' axis for calculating the angle? In the case of predicting body pose, it may affect the accuracy of the angle.
@@NicholasRenotte I just altered the code and made a squats counter😁. I think for steps, you'd have to pick 2 points you'd call 'mid-stride' and 'full-stride' and then estimate the angles. Use the same up-and-down logic: hip-knee-ankle > 160 degrees is a stride, hip-knee-ankle < 70 degrees is a mid-stride. Multiply by 2 at the end because only one leg is being tracked. I hope I haven't missed the mark entirely😅
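The up/down threshold logic described above can be sketched as a small state machine. The 70/160-degree thresholds are the guesses from the comment, not validated values:

```python
def count_reps(angles, down_thresh=70.0, up_thresh=160.0):
    """Count reps from a stream of joint angles using the tutorial's
    up/down logic: a rep completes when the angle passes below
    down_thresh and then back above up_thresh."""
    stage = None
    reps = 0
    for angle in angles:
        if angle > up_thresh:
            if stage == "down":
                reps += 1          # finished a full down-up cycle
            stage = "up"
        elif angle < down_thresh:
            stage = "down"
    return reps
```

Feed it the hip-knee-ankle angle per frame and, as suggested above, multiply by 2 if only one leg is tracked.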
Great Video Nicholas! Thank you very much. I have just one question- I am having a problem installing mediapipe. I used "pip install mediapipe" but it didn't recognize the library. Would appreciate the help. It's kinda urgent.😄
Hi! Great video. I have a question, can you do the same about calculating angles but with the video of holistic with hands and face detection? What things should I change to make it work? Thanks!
Sir, your presentation is really systematic and the content is, as always, informative. Thank you so much.
Thank you so much @Winston, glad you enjoyed it!
this channel is such a gem tbh
Thanks so much @Sikander encosa, pumped you're enjoying it!
Was just thinking yesterday how lucky we are that Nicholas Renotte is doing this. Hopefully rewarding for him, too.
Mind-blowing tutorial! Looking forward to more similar projects, especially the one you talked about (a counter for several exercises at once). Liked + subscribed + bell turned on!
When will that video come out?
I decided to try and use mediapipe for a project I was doing but got discouraged by some of the documentation as I struggled to even install it. I then came to this tutorial and you made it look so easy and understandable! Thanks.
Did you get any OS error trying to install mediapipe? If so, how did you fix it?
@@forgewhelbon1131 I didn't get any errors sorry. Good luck in sorting!
@MJ720 oh ok :( thnx tho
My boy, you earned a true fan! Greetings from Chile; on our quest to the future, you have guided us to shores!
Thank you so much. Your teaching and tutorial style is the perfect match for me. Clear and precise, and you continuously explain everything you do, thanks!!
Very good video: all the info from the start and no complicated installations.
Thanks so much @Ishan, tried to keep it as smooth as possible!
Many thanks for this tutorial! The video is well done, your explanations are very clear, and the code ran directly on my computer without any modifications, which is quite rare!
This is the best video I've seen for using and testing pose estimation. Thank you.
I've saved this vid for later. Dude, you are just amazing. I'm loving all the vids, and the way you teach is really easy to understand. Keep it up!
cheers from brazil :)
Thanks so much @zanniboni! Much love!
I needed to calculate the angle of my arm to my body and your video helped me a lot. Thank you very much. I hope you release more videos
More content creators like you should exist
I am learning Python and it's been a week, but I can totally follow you. Great work.
THANK YOU SO MUCH. YOU SAVED MY SENIOR DESIGN PROJECT
Ayyyyy, awesome work my guy!
And once again, super thanks for doing this. You make it super easy for a beginner to understand stuff. This was the second tutorial of yours I tried (the first being the Object Detection 5-hour course).
Please make more MediaPipe hands tracking projects, we love this content thank you so much
Oh you know it! Plenty more coming @Bank Crawpack Channel!
@@NicholasRenotte thank you so much
Thank you dude! You are the most underrated YouTuber I have ever seen. I'm loving this video, although I have one question: how do you do the exact same thing, but with a live screen recording instead of a camera? Hope you or anyone else can help me, because I'm really stuck.
You could capture the screen using pyautogui and use that as the video source instead of the OpenCV webcam feed. Check this out: www.thepythoncode.com/article/make-screen-recorder-python
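A minimal sketch of that idea, assuming pyautogui is installed; the generator stands in for `cv2.VideoCapture`, and `rgb_to_bgr` handles the fact that screenshots come back RGB while OpenCV expects BGR:

```python
import numpy as np

def rgb_to_bgr(frame: np.ndarray) -> np.ndarray:
    """pyautogui screenshots are RGB; OpenCV expects BGR, so reverse channels."""
    return frame[:, :, ::-1]

def screen_frames():
    """Yield BGR frames captured from the screen instead of a webcam."""
    import pyautogui  # assumption: pyautogui is installed
    while True:
        shot = pyautogui.screenshot()        # PIL Image, RGB
        yield rgb_to_bgr(np.array(shot))     # ndarray usable like a webcam frame
```

Usage: in the tutorial's loop, replace `ret, frame = cap.read()` with `frame = next(frames)` where `frames = screen_frames()`.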
@@NicholasRenotte thanks!
Excellent step-by-step video! Great work! So much help for my capstone!
Man you're the greatest, this is just what I was looking for.
Really powerful library of python and the teacher too.
What a video Nich. I love data science and computer vision contents. Thank you
Yeah......this is great.....was waiting for this project
Awesome!! Stay tuned, another sweet one coming on Sunday!
Cool project Nick you are such a great coder
Thank you so much @Sangita, much appreciated!
Great video, I'm happy that i learnt something via implementing with proper guidance.
I copied your code, but the angle is not displayed on the screen. I don't know what the problem is.
if you found a solution please tell me
Did you find a solution? I'm dealing with the same problem
@@kaviyak7308 did you find a solution?
I will keep an eye on this since it will somehow become my master thesis! :)
Awesome stuff @Gabbosaur! What's your thesis on?
@@NicholasRenotte Basically it's a sort of personal trainer but more focused on calisthenics workout types. It checks the quality of your movement and re-schedules the workout program based on your performance.
@@Gabbosauro nice! Sounds awesome man!
here for the youtube algorithms. I will definitely watch it later tho. :)
Thanks so much @Mike, let me know what you think!
mythmon spotted :3
I just tweaked this to track my basketball shooting motion in just an hour!!! Thank you for the awesome tutorial 👍🏻👍🏻👍🏻
Hey bro, could you send me your GitHub? I'd love to see how you did it. I'm having trouble implementing the first block of code after I import everything, and I'm not sure what I am doing wrong.
I’m interested in how/what you did. The reason I’m watching this is track my daughter’s softball fast pitch.
The initial idea was to identify what pattern she uses to get the strike. I thought I could monitor her movement, read the speed of the ball, and determine if it was in the strike zone.
Man, was I kidding myself. lol there is a lot to each of those tasks. Too much to accomplish all at once.
Hello Nicholas,
Great work doing all of these; it was enlightening. I wanted to know if you have done the multi-pose/ multi gym tracking you spoke of around 52:11.
Thank you.
There's a vid on the channel somewhere, I just did the estimation not the pose detection!
@@NicholasRenotte Thank you so much! Is there a way to estimate if specific key points in a video fed into the MediaPipe are missing? Say a video of me with just my head and shoulders, and I want to calculate the key point of my ankle. Is this possible?
Completed the tutorial and learned a looot!! Thank you very much :)
Super cool and awesome Nick!.
Thanks so much @Henk!
Thanks. This is amazing. Looking forward to the multi-pose tracker tutorial.
After MediaPipe Please do cover YOLO v5 /v4 ! I love your courses. They are really awesome! :)
Namaste sir
😂😂😂❤️❤️
Ayyyeee, you got it!
Bro, you are an angel for me. I have the same FYP: body posture detection with correct rep counting in my gym app. You saved me. Thanks, bro.
Neat, I’m doing the same as well haha!
Love his videos he is such a great teacher!
@Muhammad Mehmaam did you open sources your code by any chance?
It depends on my colleagues, but I will try to upload in github.
@@zindamayat Thank you so much
Were you able to implement other workout movements?
Thanks a lot for this excellent step by step explanations 👌
Awesome!!! Very interesting with lots of applications!
Agreed! Tons of stuff you can do with it!
As a beginner coder, I found this tutorial helpful and very easy to understand. There is only one part which I can not seem to grasp. I have gotten the code correct and even tried copy and pasting the one in the GitHub, but I am unable to get the angle to display on my screen. My assumption is that I have gotten the webcam dimensions incorrect (which you put as 640, 480), and I am not sure how to find out what the correct number should be.
Did you find the solution for it ??
I am studying data science here in Mumbai, India. I've got to make a capstone project, and this video inspired me to build a home workout app; still working on it. Thank you so much mate for all these videos, you are a HERO. One quick question: should I use MoveNet Lightning or the MediaPipe Pose shown in this video?
what do you want
go with mediapipe
Well explained. From Start to End.
OMG, love your video, thanks! And I like the way you code, it is so clear.
Thank you so much for the tutorial, including the notebook, man!
Great tutorial, you are such a great teacher.
@Nicolas Thanks, it is such a great help. I have 2 quick questions:
1. Why have you multiplied in the angle code: np.multiply(elbow, [640, 480]).astype(int)?
2. How do you resize the video frame? If you can suggest some learning sources on the same, it would be wonderful. Big thanks again man.
Heya @Karamvir,
1. This is because we receive normalized landmarks; we need to scale them back to our frame size to render them on our baseline image.
2. You can resize it by adding cv2.resize before the render :)!
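To make point 1 concrete, here's a small sketch of the denormalization step; (640, 480) is just the tutorial's capture resolution, not a requirement:

```python
import numpy as np

def to_pixel_coords(landmark_xy, frame_width=640, frame_height=480):
    """MediaPipe landmarks are normalized to [0, 1]; scale them back to
    pixel coordinates so cv2.putText can place text on the frame."""
    return tuple(np.multiply(landmark_xy, [frame_width, frame_height]).astype(int))
```

For point 2, `cv2.resize(image, (new_width, new_height))` before `cv2.imshow` does the resizing.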
@@NicholasRenotte can you give the landmarks for all body parts, such as knee and shoulder, to render in our frame?
Love your work, it's always on point and it teaches me a lot as a beginner.
Thanks so much @Mxolisi, super glad you enjoyed it!
Thank you so much Nicholas for this video, learned a lot from it.
Super Project. Thanks for creating this @Nicolas
Very very helpful. Glad I found it. Thank you ❤.
Hi Nicholas! First of all, thanks for the amazing tutorial. It was really helpful. I love your tutorials so much; you explain the important details in a very understandable way. But I want to ask you a question: I did not really fully understand the angle calculation part, and I could not find anything about it on the internet either. Could you please give a little bit more info about that part?
Definitely, take a look at this: manivannan-ai.medium.com/find-the-angle-between-three-points-from-2d-using-python-348c513e2cd
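For reference, a self-contained sketch of the three-point angle calculation the tutorial uses (2D only; point b is the mid joint, e.g. the elbow):

```python
import numpy as np

def calculate_angle(a, b, c):
    """Angle at point b formed by points a-b-c, in degrees.
    Each point is an (x, y) pair, e.g. normalized landmark coordinates."""
    a, b, c = np.array(a), np.array(b), np.array(c)
    # arctan2 gives the angle of each ray leaving the mid joint;
    # the joint angle is the difference between the two ray angles.
    radians = np.arctan2(c[1] - b[1], c[0] - b[0]) - np.arctan2(a[1] - b[1], a[0] - b[0])
    angle = np.abs(radians * 180.0 / np.pi)
    # Keep the interior angle (0-180) rather than the reflex angle.
    if angle > 180.0:
        angle = 360.0 - angle
    return angle
```

For example, a right angle at the elbow (shoulder straight above, wrist straight out) gives 90, and a fully straightened arm gives roughly 180.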
Do you have a video where we can tell if a person is doing an exercise the wrong way?
I mean, if he is not using correct posture while doing the exercise.
So much new information... Nick, I want to sleep at night😂😂😂😂
😂 wait til you see what's coming this weekend 😉
@@NicholasRenotte Oh my God😱 Scary to imagine. I'm looking forward to it.😁😁
@@sorochinsky yesss! Think I just wrapped up the code for it this morning!
@@NicholasRenotte This is very good, I just finished with it😁😁
I am infinitely happy😁😁 Now I can get the coordinates of the points. It is very cool. How many things you can think of.
@@sorochinsky yesss! Awesome work. I've got soooo many ideas planned for it!
I love it very much! Is it possible to calculate how accurate our poses are in order to make sure that our poses are correct comparing with gym trainer ? Thanks!!
Definitely, I'm working on a video on that rn @Tsang Wing Ho! Stay tuned!
Dude, you are awesome!!! Is the multi-joint estimation video out? If it's there, I can't find it!!! I wish to make a yoga pose detection... Thanks in advance.
Great job, and the code is made very easy so anyone can learn quickly.
it helps me a lot.
cheers from Indonesia!
Great content, loving them
Thank you so much for the information learned. About the tutorial: the display shows the angle only; why not display the time beside the top corner of the output?
Really amazing tutorial. Keep it up. It really helped me a lot. Thank you 🙂
Hello?
Did you get the result?
This is amazing!!!! Do more videos of this pleaseee
Amazing job, man!
Keep it up!)
Hi, I really appreciate the effort you put in to break everything down and make it understandable. I want to ask a question: what does 'image.flags.writeable = False' do exactly? I understand that it improves performance, but how does it do this?
Many thanks :)
Interested in knowing also
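From what I understand (based on the comments in MediaPipe's own example code, so treat this as a hedged explanation), the flag marks the NumPy array read-only, which lets the frame be passed to the model by reference instead of being copied defensively, saving a copy per frame. A tiny demonstration of the flag itself:

```python
import numpy as np

frame = np.zeros((480, 640, 3), dtype=np.uint8)
frame.flags.writeable = False      # read-only: safe to pass by reference
# results = pose.process(frame)    # detection would happen here in the tutorial
frame.flags.writeable = True       # writable again so we can draw on it
frame[0, 0] = 255                  # drawing works again
```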
Thanks a lot Nick, I succeeded in following the whole tutorial, and also added some more functions... lol, just kidding haha. I only managed to get the other hand working, so now I get two pairs of strong hands lol. Thanks a lot Nick for your help :)
Is it possible to add code for velocity between each movement of the dumbbell curl? thanks!
Excellent video. I learned a lot of things about ML in this video. Thank you so much.
Thanks for your great tutorials. Is there any tutorial, that shows how we can customize/modify or retrain the Mediapipe models?
Current models are trained for adult pose detection and have issues in pose detection in infants.
How can I improve it?
Man thank you so much for the wonderful work!
THIS TUTORIAL REALLY WORKS I AM FROM PHILIPP
YES YES YES!!! The video I more than needed. So helpful! Thank you!!!
If I wanted to use this method to calculate work done (WD = Force x Distance), how would you suggest I go about it? My thinking is that I'd assign a certain weight to either the joints or the lines between joints and use that for force (signifying weight distribution in the body); then the model tracks the distance each joint or line moves through space and multiplies to get work done. The individual work done per joint or line is summed, and I'd have the total work done by the body. Putting this into practice is what I can't seem to figure out.
I actually looked into this a while ago; it'd probably be something along the lines of angular force. Take an assumption of the mass of the arm, then calculate its acceleration (speed between frames); then, assuming you know the length of the arm, or use a proxy, e.g. coordinate-to-coordinate Euclidean distance... you should be able to do it!
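A very rough sketch of that estimate, with loud caveats: the mass, the metre units, and the vertical-only displacement are all assumptions layered on top of the tutorial, since MediaPipe landmarks come out normalized, not metric:

```python
import numpy as np

def work_done(positions_m, mass_kg, g=9.81):
    """Rough work estimate for one tracked joint: sum of |F * d| per frame,
    with F = m * g (assumed constant weight) and d the vertical displacement
    between consecutive frames. positions_m: list of (x, y) in metres."""
    ys = np.array([p[1] for p in positions_m])
    lift = np.abs(np.diff(ys))          # per-frame vertical distance moved
    return float(mass_kg * g * lift.sum())
```

Summing this over each joint you assign a mass to gives the total-body estimate described above.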
Do you want to use that work done as an estimate of the joules of energy used?
Any advice on how someone would use this method to track treadmill steps? Awesome video, BTW, very engaging!
Loved the video! Thanks for sharing bro :)
Thanks a ton @Andre 🙏
Hi, thanks for your video.
I would like to ask for the learning source for calculating joint angles; it would help me a lot. Thanks in advance :)
Mannnn, I can't remember where i got it from. It was from somewhere on the TF site.
Amazing video Nicholas, I was really amazed by your AI gym tracker video. What are all the workouts this system can recognize, or does it measure using the positions of the coordinates?
Hi,
Can we use a depth camera and put the real depth value (z value) in, instead of its estimation as (result.pose_landmark.z)?
MY MAN YOU ARE THE REAL ONE!!!
Excellent! Thanks Nicholas!
Can you explain more about the arctan formula in this video? Are there any sources for me to learn about that? It took me about 2 hours to finish this project, but I do not really understand how the formula works. Thanks!
Check this out: www.mathopenref.com/arctan.html#:~:text=The%20arctan%20function%20is%20the%20inverse%20of%20the%20tangent%20function.&text=Means%3A%20The%20angle%20whose%20tangent,to%20know%20the%20actual%20angle.
awesome! quick question: how can I compare two different forms, one from a workout video and one from a person trying out their exercise?
Hi , Thank you . Can you record a tutorial with the Hands model as well, and explain how to detect the left or right hand and points ?
Eran
👀 First part of this will be out tomorrow, left and right detection should be done on Sun!
"Make Detections" and the parts after that don't work. I run the code but nothing happens. Do you have any idea about that? Could you please help me? Thank you so much.
Big salute for you work 🤩🤩
Great! Can you combine MediaPipe with YOLO to detect multiple objects on one camera? I'm looking forward to it!
First of all this channel is amazing, I just found it!
Second of all, I have a question about the angle calculation. How is it independent from the camera angle? You're using projected 2D points to calculate an angle that actually exists in 3D space, so the camera angle can change the end result a lot. Am I missing something?
Anyway thanks for the great content
Nope, you're correct, I simplified it by calculating it as though it were in a 2d space. You could refine the calc by adding depth to the angle formula as well though!
@@NicholasRenotte thanks for answering, I'll have that in mind for my project!
@@NicholasRenotte Can you provide an example of how to consider depth as well in the angle calculation?
@@harshpatil6168 I think you'll have to use another camera or use a library that also has 3D predictions even with one camera
@@harshpatil6168 Using kinect camera can solve this, it can detect depth
this was a fun video cheers nick:)
Nice tutorial! Great! Do you know if you can use this for multi-person pose detection at the same time? How can I get it?
Heya @TechTales SC, this model doesn't support multiple people. Check out OpenPose for it
Hey Nicholas, how can we create a 3D map of the surroundings using OpenCV? It could serve applications like calculating the distance between joints and many others for developing an AI trainer.
Heya @Jasdeep, take a look at SLAM using Lidar for that type of use case :)
Great video! Why use angles for the curl tracker tho? It would be easier to use the position of the wrist and elbow. If the wrist’s position is higher than the elbow’s position that counts as a curl.
Could definitely do that @Paul, wanted to showcase an alternate technique. Could also just use ML to predict the top of the curl and the bottom, like in my gym video.
@@NicholasRenotte thanks for clarifying
Hi, thanks for the great video. I was wondering if you know if it's possible to only show certain landmark points? I am trying to implement a side-on squat analyser and have successfully joined up landmarks only for the side on show, but the landmark dots are still visible for the other side of the body. I also do not need all of the face points, and for tidiness' sake I'd like to not show them. Thanks!
Heya @Harry, you can, though I don't believe you can do it with the native viz code. I've got some code samples I can shoot you that show how to visualize it manually; I'm going to be demoing it this week with MoveNet.
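In the meantime, here's a sketch of the manual approach: pick the landmark indices you want and draw them yourself with cv2.circle. The left-side indices below follow mp.solutions.pose.PoseLandmark, but double-check them against the docs:

```python
# Left-side pose landmarks: shoulder, elbow, wrist, hip, knee, ankle.
LEFT_SIDE = [11, 13, 15, 23, 25, 27]

def select_points(landmarks, indices, width, height):
    """landmarks: sequence of objects with .x/.y in [0, 1] (MediaPipe style).
    Returns pixel (x, y) tuples for just the chosen indices."""
    return [(int(landmarks[i].x * width), int(landmarks[i].y * height))
            for i in indices]

# In the render loop (assumes `results` from pose.process and a BGR `image`):
# for x, y in select_points(results.pose_landmarks.landmark, LEFT_SIDE, 640, 480):
#     cv2.circle(image, (x, y), 4, (245, 117, 66), -1)
```

Skipping mp_drawing.draw_landmarks entirely also drops all the face points for free.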
Were you able to implement other workout movements?
Thanks mate 💯... Where can I see the documentation for MediaPipe? Like the other functions, the syntax of those functions, and how to access data of different data types.
This is the official documentation @Jasdeep but just a heads up, it's not super clear about accessing different components: google.github.io/mediapipe/
thanks man... do let us know if u find any other detailed documentation.
@@jasdeep482 definitely, will do.
All your tutorials are so amazing and productive. In this lesson, I want to display the angle as a whole number, not a decimal. How can we do this? By the way, thank you for creating a nice tutorial.
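One way to do it, assuming `angle` is the float from the tutorial's calculation: round it before turning it into the text passed to cv2.putText.

```python
angle = 173.4821
label = str(round(angle))   # whole-number text for cv2.putText
```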
When I run the code I do not get any UI like the rep counter, joint indicators, the angle indicator, etc. I am just getting the live video feed. Can someone please help?
Thanks for the amazing video... I'm just facing one problem: the image in the webcam appears to be a mirror image. I tried flipping the image, but the landmarks do not flip. It does not affect the arm, but the landmarks stay in the same place without flipping, so whenever I'm lifting my left arm, my right arm's landmarks are lifting away.
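A sketch of the usual fix: mirror the frame *before* it goes into the pose model, so the landmarks are detected on the mirrored image and line up with the overlay. The `mirror` helper is equivalent to `cv2.flip(frame, 1)`:

```python
import numpy as np

def mirror(frame: np.ndarray) -> np.ndarray:
    """Horizontal flip, same as cv2.flip(frame, 1)."""
    return np.ascontiguousarray(frame[:, ::-1])

# In the capture loop:
#   ret, frame = cap.read()
#   frame = mirror(frame)          # flip first...
#   results = pose.process(frame)  # ...then detect, so overlays match the mirror
```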
Are you able to make new connections between keypoints, ie draw a line between the eye or mouth key point to the shoulder key point like you do in the MoveNet video?
Awesome tutorial, very thorough. I was trying to see how to invert the angle measurement: rather than decreasing the angle from 180 degrees to 0 degrees, I actually want to go from 0 degrees to 180 degrees (goniometry-style calculation of the joint angle). So while staging up, the angle should increase from 0 degrees towards 180 degrees, and while staging down it should decrease from 180 degrees back to 0.
I need help with this, please. I have tried to tweak the angle calculation function you defined, yet I am not able to get it right. Kindly help.
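One way to get the goniometry-style reading: the tutorial's angle function returns the *interior* joint angle (about 180° for a straight limb, near 0° when fully curled), so flexion is simply 180 minus that value. A sketch, with the interior-angle function reimplemented here so the example is self-contained:

```python
import numpy as np

def calculate_angle(a, b, c):
    """Interior angle at point b, in degrees (as in the tutorial)."""
    a, b, c = np.array(a), np.array(b), np.array(c)
    radians = (np.arctan2(c[1] - b[1], c[0] - b[0])
               - np.arctan2(a[1] - b[1], a[0] - b[0]))
    angle = abs(radians * 180.0 / np.pi)
    if angle > 180.0:
        angle = 360.0 - angle
    return angle

def flexion_angle(a, b, c):
    """Goniometry-style: 0 deg for a straight limb, rising as it bends."""
    return 180.0 - calculate_angle(a, b, c)

# Straight arm: shoulder, elbow, wrist roughly in a line -> flexion ~0
print(flexion_angle((0.0, 0.0), (0.0, 1.0), (0.0, 2.0)))  # 0.0
```

You would then also swap the stage thresholds in the counter logic, since "up" and "down" now correspond to the opposite ends of the range.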
Great tutorial...how do I add more exercises ?? Is there any video about it?
Not yet, working on it my guy!
@@NicholasRenotte thanks
How do you handle situations where there are no detections? MediaPipe seems a bit weird when the visibility drops, or when part of your body is covering another landmark. Take the arm landmarks, say: what is the output (the result at a specified landmark) when the camera cannot see the arm? That will influence the calculated angle.
Heya @1980legend, with the pose model I believe you still receive a coordinate, but the visibility value drops down significantly. I was testing this out today; you're right, though, that some of the other models in the holistic solution don't handle it as gracefully (e.g. the hands model). I normally wrap it in a try: except: block now and just pass on to the next frame if no detections are found!
@@NicholasRenotte Thanks man, I'm loving the content. The reason I asked is that I was using the angle calc for pose recognition, but it's a bit glitchy when the orientation isn't perfect, or, I'm guessing, when it loses visibility. When one arm was blocking the other I would get an output suggesting a specific pose was happening when it wasn't, so I couldn't really distinguish poses. I should just experiment with what the outputs are when visibility is low, and then factor those specific angles out. That might be a bit problematic if they're close to the angles I need for the poses, though.
I suppose I could use the visibility as a condition?
@@1980legend agreed, this technique is great for simple examples but I actually have a better (more robust) way to do the rep counting that also works for more complex poses as well. The code is loosely based around this ua-cam.com/video/We1uB79Ci-w/v-deo.html but will have additional logic for the counter based on pose probability.
@@NicholasRenotte yeah I’m halfway through that video.
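Combining the two ideas from this thread (the try/except guard and the visibility condition) might look like the sketch below. The 0.5 threshold is an arbitrary cutoff I've picked for illustration, and the `Landmark` namedtuple stands in for MediaPipe's real landmark objects:

```python
from collections import namedtuple

# Stand-in for MediaPipe's landmark objects (which expose the same fields).
Landmark = namedtuple("Landmark", ["x", "y", "visibility"])

VISIBILITY_THRESHOLD = 0.5  # arbitrary cutoff; tune for your camera setup

def safe_point(landmarks, idx, thresh=VISIBILITY_THRESHOLD):
    """Return (x, y) only if the landmark is confidently visible, else None."""
    try:
        lm = landmarks[idx]
    except (IndexError, TypeError):  # no detections on this frame
        return None
    return (lm.x, lm.y) if lm.visibility >= thresh else None

def maybe_angle(landmarks, ids, angle_fn):
    """Compute an angle only when every required point is visible."""
    pts = [safe_point(landmarks, i) for i in ids]
    if any(p is None for p in pts):
        return None  # skip this frame; don't update the rep counter
    return angle_fn(*pts)
```

Returning `None` instead of a bogus angle keeps the occluded-arm case from triggering a false pose match.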
First of all, a great experience learning from you. My question is: why didn't you use the z axis for calculating the angle? In the case of predicting body pose, it may affect the accuracy of the angle.
Don't really need it in this case @Aryendra as we're just calculating in a two dimensional space. You could try applying however!
I got bad results with z. You should choose a good camera position to get a correct calculation.
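For anyone who does want to experiment with z, the arctan-based formula in the video is inherently 2D, but the dot-product form generalises to three dimensions. A sketch (not from the video):

```python
import numpy as np

def angle_3d(a, b, c):
    """Angle at point b, in degrees, using all three coordinates."""
    a, b, c = np.array(a, float), np.array(b, float), np.array(c, float)
    ba, bc = a - b, c - b
    # cos(theta) = (ba . bc) / (|ba| |bc|); clip guards against
    # floating-point values just outside [-1, 1].
    cosine = np.dot(ba, bc) / (np.linalg.norm(ba) * np.linalg.norm(bc))
    return np.degrees(np.arccos(np.clip(cosine, -1.0, 1.0)))

# Right angle in 3D:
print(angle_3d((1, 0, 0), (0, 0, 0), (0, 0, 1)))  # 90.0
```

As noted above, MediaPipe's z estimate is noisier than x and y, so results depend heavily on camera position.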
Really Amazing. Can we apply the same logic for legs, to count the number of steps walked? Can you do a video how to count the number of steps walked.
Sure could!
@@NicholasRenotte I just altered the code and made a squats counter😁. I think for steps, you'd have to pick 2 points you'd call 'mid-stride' and 'full-stride' then estimate the angles. Use the same up and down logic: hip-knee-ankle > 160 degrees is a stride, hip-knee-ankle < 70 degrees is a mid-stride. Multiply by 2 at the end because only one leg is being tracked.
I hope I haven't missed the mark entirely😅
@@Uncle_Buchi Thank you for replying ❤️. I will try the logic and update you with the results ☺️
@@Bharath_Kumar234 sure thing! Best of luck
@@Uncle_Buchi Can you share the code that you made changes? I also wanted to do a squat rep counter but I failed.
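I don't have @Uncle_Buchi's actual code, but his described logic (angle > 160 = up/standing, angle < 70 = down/deep, same up-down staging as the curl counter) can be sketched like this; the thresholds are the ones from his comment:

```python
def count_reps(angles, up_thresh=160, down_thresh=70):
    """Count reps from a sequence of hip-knee-ankle angles.

    A rep is counted when the angle dips below down_thresh ("down")
    and then returns above up_thresh ("up"), mirroring the curl
    counter's stage logic from the tutorial.
    """
    count, stage = 0, None
    for angle in angles:
        if angle > up_thresh:
            if stage == "down":
                count += 1
            stage = "up"
        elif angle < down_thresh:
            stage = "down"
    return count

# Two squats: stand -> deep -> stand -> deep -> stand
print(count_reps([170, 120, 60, 110, 175, 65, 170]))  # 2
```

In the live loop you'd feed each frame's angle into the same stage/count variables instead of a list. For steps, per the comment above, you'd double the final count since only one leg is tracked.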
Once we calculate the distance between two joints, how can we convert it into centimetres or metres, i.e. real-world measurements?
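MediaPipe's landmarks are normalized (0 to 1 relative to the frame), so there's no built-in real-world scale; you need a calibration reference, such as an object of known length in frame at roughly the same depth as the joints. A sketch, where all the numbers are hypothetical:

```python
import math

def pixel_distance(p1, p2, frame_w, frame_h):
    """Distance in pixels between two normalized (x, y) landmarks."""
    dx = (p1[0] - p2[0]) * frame_w
    dy = (p1[1] - p2[1]) * frame_h
    return math.hypot(dx, dy)

def to_cm(pixels, ref_pixels, ref_cm):
    """Scale a pixel distance using a reference object of known length.

    Only valid when the reference sits at roughly the same distance
    from the camera as the joints being measured; perspective breaks
    the single-scale assumption otherwise.
    """
    return pixels * (ref_cm / ref_pixels)

# Hypothetical: a 30 cm ruler spans 120 px, joints are 200 px apart.
print(to_cm(200, 120, 30))  # 50.0
```

For anything more accurate than this (varying depth, moving subject), you'd need proper camera calibration or a depth sensor.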
Great Video Nicholas! Thank you very much. I have just one question- I am having a problem installing mediapipe. I used "pip install mediapipe" but it didn't recognize the library. Would appreciate the help. It's kinda urgent.😄
Shoot, it might be your Python version. Can you try 3.7.3 instead?
Hi! Great video. I have a question: can you do the same angle calculation, but on the holistic video with hands and face detection? What would I need to change to make it work? Thanks!
Just need to change the keypoints you pick up! Planning a bigger video on rep counting using ML as well!
Thank you Sir!!!
For such an awesome content🤠
Thank you so much @Vaishnavi!