Sir, your presentation is really systematic and content is as always informative. Thank you so much.
Thank you so much @Winston, glad you enjoyed it!
this channel is such a gem tbh
Thanks so much @Sikander encosa, pumped you're enjoying it!
Mind-blowing tutorial! Looking forward to more similar projects, especially the one you talked about (a counter for several exercises at once). Liked + subscribed + bell turned on!
When will that video come out
I decided to try and use mediapipe for a project I was doing but got discouraged by some of the documentation as I struggled to even install it. I then came to this tutorial and you made it look so easy and understandable! Thanks.
Did you get any OS error trying to install MediaPipe? If so, how did you fix it?
@@forgewhelbon1131 I didn't get any errors sorry. Good luck in sorting!
@MJ720 oh ok :( thnx tho
More content creators like you should exist
Thank you so much. Your teaching and tutorial style is the perfect match for me. Clear and precise, and you continuously explain everything you do. Thanks!!
THANK YOU SO MUCH. YOU SAVED MY SENIOR DESIGN PROJECT
Ayyyyy, awesome work my guy!
Very good video: all the info from the start and no complicated installations.
Thanks so much @Ishan, tried to keep it as smooth as possible!
Please make more MediaPipe hand tracking projects; we love this content, thank you so much.
Oh you know it! Plenty more coming @Bank Crawpack Channel!
@@NicholasRenotte thank you so much
This is the best video I've seen for using and testing pose estimation. Thank you.
I've saved this vid for later. Dude, you are just amazing. I'm loving all the vids, and the way you teach is really easy to understand. Keep it up!
cheers from brazil :)
Thanks so much @zanniboni! Much love!
I needed to calculate the angle of my arm to my body and your video helped me a lot. Thank you very much. I hope you release more videos
I am learning Python and it's only been a week, but I can totally follow you. Great work.
Many thanks for this tutorial! The video is well done, your explanations are very clear, and the code ran directly on my computer without any modifications, which is quite rare!
A really powerful Python library, and a great teacher too.
Thank you dude! You are the most underrated YouTuber I have ever seen. I'm loving this video, although I have one question: how do you do the exact same thing, but with a live screen recording instead of a camera? Hope you or anyone else can help me, because I'm really stuck.
You could capture the screen using pyautogui and use that as the video source instead of the webcam feed from OpenCV. Check this out: www.thepythoncode.com/article/make-screen-recorder-python
@@NicholasRenotte thanks!
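For anyone searching later, here is a minimal sketch of the pyautogui approach from the reply above. `pil_to_bgr` and `screen_capture_loop` are illustrative names, and it assumes `pyautogui` and `opencv-python` are installed:

```python
import numpy as np

def pil_to_bgr(img) -> np.ndarray:
    """Convert an RGB screenshot (PIL image or array) to OpenCV's BGR layout."""
    return np.ascontiguousarray(np.array(img)[:, :, ::-1])

def screen_capture_loop():
    # Imports kept local so pil_to_bgr stays usable without a display attached.
    import cv2
    import pyautogui

    while True:
        shot = pyautogui.screenshot()           # grab the screen as a PIL image
        frame = pil_to_bgr(shot)                # use this in place of cap.read()'s frame
        cv2.imshow("Screen feed", frame)
        if cv2.waitKey(10) & 0xFF == ord("q"):  # press q to stop
            break
    cv2.destroyAllWindows()
```

The `frame` array has the same shape and dtype as a webcam frame, so the rest of the tutorial's pipeline (recolor to RGB, run the pose model, draw landmarks) should work on it unchanged.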
I just tweaked this to track my basketball shooting motion in just an hour!!! Thank you for the awesome tutorial 👍🏻👍🏻👍🏻
Hey bro, could you send me your GitHub? I'd love to see how you did it. I'm having trouble implementing the first block of code after I import everything, and I'm not sure what I am doing wrong.
I'm interested in how/what you did. The reason I'm watching this is to track my daughter's softball fast pitch.
The initial idea was to identify what pattern she uses to get the strike. I thought I could monitor her movement, read the speed of the ball, and determine if it was in the strike zone.
Man, was I kidding myself. lol there is a lot to each of those tasks. Too much to accomplish all at once.
Excellent step-by-step video! Great work! So much help for my capstone!
I copied your code, but the angle is not displayed on the screen. I don't know what the problem is.
if you found a solution please tell me
Did you find a solution? I'm dealing with the same problem
@@kaviyak7308 did you find a solution?
And once again, super thanks for doing this. You make it super easy for a beginner to understand things. This was the second tutorial of yours I tried (the first being the 5-hour object detection course).
Man you're the greatest, this is just what I was looking for.
After MediaPipe Please do cover YOLO v5 /v4 ! I love your courses. They are really awesome! :)
Namaste sir
😂😂😂❤️❤️
Ayyyeee, you got it!
Yeah......this is great.....was waiting for this project
Awesome!! Stay tuned, another sweet one coming on Sunday!
here for the youtube algorithms. I will definitely watch it later tho. :)
Thanks so much @Mike, let me know what you think!
mythmon spotted :3
Thanks. This is amazing. Looking forward to the multi-pose tracker tutorial.
What a video Nich. I love data science and computer vision contents. Thank you
I will keep an eye on this since it will somehow become my master thesis! :)
Awesome stuff @Gabbosaur! What's your thesis on?
@@NicholasRenotte Basically it's a sort of personal trainer but more focused on calisthenics workout types. It checks the quality of your movement and re-schedules the workout program based on your performance.
@@Gabbosauro nice! Sounds awesome man!
Cool project Nick you are such a great coder
Thank you so much @Sangita, much appreciated!
Great video. I'm happy that I learnt something by implementing it with proper guidance.
Great tutorial, you are such a great teacher.
Thank you so much, I learned a lot. About the tutorial: the display shows the angle only; why not also display the time in the top corner of the output?
Bro, you are an angel for me. I have the same FYP: body posture detection with correct rep counting in my gym app. You saved me. Thanks, bro.
Neat, I’m doing the same as well haha!
Love his videos he is such a great teacher!
@Muhammad Mehmaam did you open-source your code by any chance?
It depends on my colleagues, but I will try to upload it to GitHub.
@@zindamayat Thank you so much
Were you able to implement other workout movements?
Well explained. From Start to End.
Super cool and awesome Nick!.
Thanks so much @Henk!
Love your work; it's always on point, and it teaches me a lot as a beginner.
Thanks so much @Mxolisi, super glad you enjoyed it!
Thank you so much Nicholas for this video, learned a lot from it.
Completed the tutorial and learned a looot!! Thank you very much :)
Thank you so much for the tutorial, including the notebook, man!
As a beginner coder, I found this tutorial helpful and very easy to understand. There is only one part which I can not seem to grasp. I have gotten the code correct and even tried copy and pasting the one in the GitHub, but I am unable to get the angle to display on my screen. My assumption is that I have gotten the webcam dimensions incorrect (which you put as 640, 480), and I am not sure how to find out what the correct number should be.
Did you find the solution for it ??
Very very helpful. Glad I found it. Thank you ❤.
Super Project. Thanks for creating this @Nicolas
Awesome!!! Very interesting with lots of applications!
Agreed! Tons of stuff you can do with it!
MY MAN YOU ARE THE REAL ONE!!!
Excellent video. I learned a lot of things about ML in this video. Thank you so much.
Hi Nicholas! First of all, thanks for the amazing tutorial. It was really helpful. I love your tutorials so much; you explain the important details in a very understandable way. But I want to ask a question: I did not really fully understand the angle calculation part, and I could not find anything about it on the internet either. Could you please give a little bit more info about that part?
Definitely, take a look at this: manivannan-ai.medium.com/find-the-angle-between-three-points-from-2d-using-python-348c513e2cd
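The linked article's three-point method is the same arctan2 trick used in the tutorial; roughly:

```python
import numpy as np

def calculate_angle(a, b, c):
    """Angle at the middle point b (e.g. shoulder-elbow-wrist), in degrees."""
    a, b, c = np.array(a), np.array(b), np.array(c)
    # Angle of each outer point as seen from b, then the difference between them.
    radians = np.arctan2(c[1] - b[1], c[0] - b[0]) - np.arctan2(a[1] - b[1], a[0] - b[0])
    angle = np.abs(np.degrees(radians))
    # Fold reflex angles back into the 0-180 range a joint can actually make.
    return 360.0 - angle if angle > 180.0 else angle
```

The angle is taken at the middle landmark, so passing shoulder, elbow, wrist gives the elbow angle: a fully extended arm reads near 180 degrees, a fully curled one near 0.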
THIS TUTORIAL REALLY WORKS I AM FROM PHILIPP
Dude, you are awesome!!! Is the multi-joint estimation video out? If it's there, I can't find it! I wish to make a yoga pose detector... Thanks in advance.
This is amazing!!!! Do more videos of this pleaseee
it helps me a lot.
cheers from Indonesia!
Great job, and the code was made very easy so anyone can learn quickly.
OMG, love your video, thanks. I like the way you code; it is so clear.
Great content, loving them
All your tutorials are so amazing and productive. In this lesson, I want to display the angle as a whole number, not a decimal. How can we do this? By the way, thank you for creating such a nice tutorial.
Any advice on how someone would use this method to track treadmill steps? Awesome video, BTW, very engaging!
Excellent! Thanks Nicholas!
Really amazing tutorial. Keep it up. It really helped me a lot. Thank you 🙂
Hello?
Did you get the result?
Amazing job, man!
Keep it up!)
So much new information... Nick, I want to sleep at night😂😂😂😂
😂 wait til you see what's coming this weekend 😉
@@NicholasRenotte Oh my God😱 Scary to imagine. I'm looking forward to it.😁😁
@@sorochinsky yesss! Think I just wrapped up the code for it this morning!
@@NicholasRenotte This is very good, I just finished with it😁😁
I am infinitely happy😁😁 Now I can get the coordinates of the points. It is very cool. How many things you can think of.
@@sorochinsky yesss! Awesome work. I've got soooo many ideas planned for it!
I am studying data science here in Mumbai, India. I've got to make a capstone project, and this video inspired me to build a home workout app; I'm still working on it. Thank you so much, mate, for all these videos, you are a HERO. One quick question: should I use MoveNet Lightning or MediaPipe Pose as shown in this video?
what do you want
go with mediapipe
I love it very much! Is it possible to calculate how accurate our poses are, in order to make sure that our poses are correct compared with a gym trainer? Thanks!!
Definitely, I'm working on a video on that rn @Tsang Wing Ho! Stay tuned!
Big salute for you work 🤩🤩
Amazing video Nicholas, I was really amazed by your AI gym tracker. Which workouts can this system recognize, or does it just measure using the positions of the coordinates?
Pretty good, my search ends here.
Man thank you so much for the wonderful work!
Hi, I really appreciate the effort you put in to break everything down and make it understandable. I want to ask a question: what does 'image.flags.writeable = False' do exactly? I understand that it improves performance, but how does it do this?
Many thanks :)
Interested in knowing also
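For later readers: setting the flag marks the underlying NumPy array read-only, which signals to MediaPipe that it can process the buffer by reference instead of taking a defensive copy. Here is a numpy-only demonstration of the flag itself (the performance effect inside MediaPipe isn't shown here):

```python
import numpy as np

# A dummy 640x480 BGR frame, like one returned by cap.read().
frame = np.zeros((480, 640, 3), dtype=np.uint8)

frame.flags.writeable = False   # lock the buffer: writes now raise ValueError
try:
    frame[0, 0, 0] = 255
except ValueError:
    pass                        # as expected, the read-only frame rejects writes

frame.flags.writeable = True    # unlock it again before drawing landmarks on it
frame[0, 0, 0] = 255            # now the write succeeds
```

That is why the tutorial flips the flag back to True after `pose.process()`: the landmark drawing step needs to write into the frame.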
Nicholas, please make us a video on how to detect steps.
this was a fun video cheers nick:)
YES YES YES!!! The video I more than needed. So helpful! Thank you!!!
If I wanted to use this method to calculate work done (WD = Force x Distance), how would you suggest I go about it? My thinking is that I'd assign a certain weight to either the joints or the lines between joints and use that for force (signifying weight distribution in the body); then the model tracks the distance each joint or line moves through space and multiplies to get work done. The individual work done per joint or line is summed, and I'd have the total work done by the body. Putting this into practice is what I can't seem to figure out.
I actually looked into this a while ago; it'd probably be something along the lines of angular force. Take an assumption of the mass of the arm, then calculate its acceleration (speed between frames); then, assuming you know the length of the arm, or use a proxy, e.g. coordinate-to-coordinate Euclidean distance... you should be able to do it!
You want to use that work done found as an estimate to Joules of energy used?
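One possible starting point for the distance half of that calculation, sketched with an assumed calibration factor (`scale_m_per_unit`) that you would have to measure yourself; `joint_speed` is an illustrative name, not anything from the tutorial:

```python
import numpy as np

def joint_speed(prev_xy, curr_xy, fps, scale_m_per_unit=1.0):
    """Approximate joint speed: displacement between consecutive frames times frame rate."""
    displacement = np.linalg.norm(np.subtract(curr_xy, prev_xy))
    return float(displacement * scale_m_per_unit * fps)
```

With a speed per frame you can difference again for acceleration, and accumulate `displacement * scale_m_per_unit` over the rep for the distance term. Note the landmarks are in image coordinates, so without calibration (e.g. a known limb length in metres) the result is only in arbitrary units.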
"Make Detections" and the parts after it don't work. I run the code but nothing happens. Do you have any idea about that? Could you please help me? Thank you so much.
Great lecturer
Just trying to spread the coding love @Benya, glad you enjoyed it!
Hi, thanks, your video helped me find the coordinates of the joints.
Amazing, brother, I just finished it yesterday. I think we are both on the same path.
Awesome! How did you find it @Ameer?
Thanks a lot Nick, I managed to follow the whole tutorial, and also added some more functions... just kidding haha, I only managed to add the other arm, so now I get two pairs of strong arms lol. Thanks a lot Nick for your help :)
Really awesome!
@Nicolas Thanks, it is such a great help. I have 2 quick questions:
1. Why have you multiplied in the angle code: np.multiply(elbow, [640, 480]).astype(int)?
2. How do you resize the video frame? If you can suggest some learning sources on the same, it would be wonderful. Big thanks again man.
Heya @Karamvir,
1. This is because we receive normalized landmarks; we need to scale them back to our frame size to render them on our baseline image.
2. You can resize it by adding cv2.resize before the render :)!
@@NicholasRenotte can you give the landmarks for all body parts, such as the knee and shoulder, to render in our frame?
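Expanding on point 1, here is a small sketch of the scaling step. `FRAME_W` and `FRAME_H` are assumptions; match them to your actual capture resolution:

```python
import numpy as np

FRAME_W, FRAME_H = 640, 480  # assumed capture size; check what your webcam actually delivers

def to_pixels(landmark_xy, width=FRAME_W, height=FRAME_H):
    """Scale a normalized MediaPipe (x, y) landmark in [0, 1] to pixel coordinates."""
    x, y = np.multiply(landmark_xy, [width, height]).astype(int)
    return int(x), int(y)
```

Every tracked body part has an index in mp.solutions.pose.PoseLandmark (LEFT_KNEE, RIGHT_SHOULDER, and so on), so the same scaling applies to any landmark you pull out of the results.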
Thank you!! Your video is really helpful to me. While making my push-up counter, I subscribed and hit the like button!
Loved the video! Thanks for sharing bro :)
Thanks a ton @Andre 🙏
Try and use the pose estimation to drive a metahuman rig on unreal engine.
I need to learn so I can do this 😆
Super cool stuff.
Hahahah yesss, then apply it in rl! Got some rigging stuff planned!
@@NicholasRenotte lol right. heck yeah! cant wait!
@@CrypticSymmetry sweeet, will hit you up when it's out! At least the software bit
😉
Thanks for your great tutorials. Is there any tutorial that shows how we can customize/modify or retrain the MediaPipe models? The current models are trained for adult pose detection and have issues with pose detection in infants. How can I improve it?
First of all this channel is amazing, I just found it!
Second of all, I have a question about the angle calculation. How is it independent of the camera angle? You're using projected 2D points to calculate an angle that actually exists in 3D space, so the camera angle can change the end result a lot. Am I missing something?
Anyway thanks for the great content
Nope, you're correct; I simplified it by calculating it as though it were in 2D space. You could refine the calculation by adding depth to the angle formula as well, though!
@@NicholasRenotte thanks for answering, I'll have that in mind for my project!
@@NicholasRenotte Can you provide an example of how to consider depth as well in the angle calculation?
@@harshpatil6168 I think you'll have to use another camera or use a library that also has 3D predictions even with one camera
@@harshpatil6168 Using kinect camera can solve this, it can detect depth
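A sketch of the depth-aware variant suggested above; note that MediaPipe's z is only a relative depth estimate, not metric depth, so this is an approximation unless you substitute real depth from a device like a Kinect:

```python
import numpy as np

def angle_3d(a, b, c):
    """Angle at b between vectors b->a and b->c, using (x, y, z), in degrees."""
    a, b, c = np.asarray(a, float), np.asarray(b, float), np.asarray(c, float)
    ba, bc = a - b, c - b
    # Cosine of the angle via the dot product, clipped for float safety.
    cos = np.dot(ba, bc) / (np.linalg.norm(ba) * np.linalg.norm(bc))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))
```

Feeding in (landmark.x, landmark.y, landmark.z) triples instead of 2D pairs makes the result less sensitive to the camera viewpoint, to the extent that the z estimate can be trusted.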
Great video! Why use angles for the curl tracker, though? It would be easier to use the positions of the wrist and elbow: if the wrist's position is higher than the elbow's, that counts as a curl.
Could definitely do that @Paul; I wanted to showcase an alternate technique. Could also just use ML to predict the top of the curl and the bottom, like in my gym video.
@@NicholasRenotte thanks for clarifying
Sir, I am getting an error at the angle part you mentioned: the angle is not shown when we turn on the webcam. Can you help me with this?
What's the error?
@@NicholasRenotte I have the same issue; it doesn't show an error, but the text of the coordinates doesn't appear in the camera.
Hi Mr. Renotte (@nicholasrenotte), you are giving great examples. In this study, I tried to measure the shoulder, knee and ankle angles of a person shooting a basket from the side, but I could not. Can you share with us an example detected from the side? Even if you don't have the time, thank you very much; even these studies increased my ability.👏👏👏
I think he mentioned that this method has limited potential, particularly for a single joint. For multi-pose, try using another method.
@@dipankarnandi7708 can you please guide me on whether I can upload the same model to the cloud and use it in my Android React Native app?
@@sobanrauf7649 yes, model deployment onto the cloud is possible using AWS, Azure or GCP with Docker. Another way is to build the whole app around the model and then deploy the app using the options mentioned above. I would prefer way 2; since I don't know HTML, CSS or React, I make web apps with Streamlit and other options and deploy them.
@@dipankarnandi7708 will it work the same when the model is uploaded and embedded in the app as it works here with OpenCV? Would I have to change the code in the model, or upload it as-is?
@@sobanrauf7649 you've got to use the cloud features here. Do this: first create the app and embed the model in it. See if it works. If it works there, then all you've got to do is containerize it with Docker / AWS SageMaker to deploy wherever you want. Go step by step.
Hi, thank you. Can you record a tutorial with the Hands model as well, and explain how to detect the left or right hand and its points?
Eran
👀 First part of this will be out tomorrow, left and right detection should be done on Sun!
Really amazing. Can we apply the same logic to the legs, to count the number of steps walked? Can you do a video on how to count the number of steps walked?
Sure could!
@@NicholasRenotte I just altered the code and made a squat counter😁. I think for steps, you'd have to pick 2 points you'd call 'midstride' and 'full-stride' and then estimate the angles. Use the same up and down logic: hip-knee-ankle > 160 degrees is a stride, hip-knee-ankle < 70 degrees is a midstride. Multiply by 2 at the end because only one leg is being tracked. I hope I haven't missed the mark entirely😅
@@Uncle_Buchi Thank you for replying ❤️. I will try the logic and update you with the results ☺️
@@Bharath_Kumar234 sure thing! Best of luck
@@Uncle_Buchi Can you share the code that you made changes? I also wanted to do a squat rep counter but I failed.
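For anyone rebuilding this, the up/down logic described in the thread can be sketched as a small hysteresis counter. `update_counter` is an illustrative name; the thresholds are the ones suggested above and will need tuning for your own setup:

```python
def update_counter(angle, stage, count, down_thresh=70.0, up_thresh=160.0):
    """Count one rep each time the joint angle travels below down_thresh
    and back above up_thresh (works for squats, curls, or one leg's strides)."""
    if angle < down_thresh:
        stage = "down"
    elif angle > up_thresh and stage == "down":
        stage = "up"
        count += 1
    return stage, count

# Example: one full squat seen as a stream of hip-knee-ankle angles.
stage, count = None, 0
for angle in [170, 150, 65, 90, 170]:
    stage, count = update_counter(angle, stage, count)
# count is now 1
```

The two-threshold hysteresis is what stops jitter around a single threshold from double-counting reps; for strides, double the count at the end since only one leg is tracked.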
awesome! quick question: how can I compare two different forms, one from a workout video and one from a person trying out their exercise?
Great! Can you combine MediaPipe with YOLO to detect multiple objects in one camera? I'm looking forward to it!
multi-gym tracking video soon? great content
Multi agent?
@@NicholasRenotte yes, when will that video be uploaded?
@@nakshatrasingh7853 hmmm, probably a while away, haven't tested it out yet!
@@NicholasRenotte great man, keep up the work
Dude, you are awesome!
Please make more MediaPipe tutorials, especially hand gesture control, please!
Defs! Thought of a good idea for gesture based stuff yesterday @Nakul!
Hello Nicholas,
Great work doing all of these; it was enlightening. I wanted to know if you have done the multi-pose/ multi gym tracking you spoke of around 52:11.
Thank you.
There's a vid on the channel somewhere, I just did the estimation not the pose detection!
@@NicholasRenotte Thank you so much! Is there a way to estimate if specific key points in a video fed into the MediaPipe are missing? Say a video of me with just my head and shoulders, and I want to calculate the key point of my ankle. Is this possible?
Hi, thanks for your video. I would like to ask about the learning source for calculating joint angles; it would help me a lot. Thanks in advance :)
Mannnn, I can't remember where I got it from. It was from somewhere on the TF site.
Hey Nicholas, this is my first time using MediaPipe. Why am I not getting the MediaPipe feed when I print(results) in the "Make Detections" section of the code (I am using Google Colab)?
Thank you Sir!!!
For such an awesome content🤠
Thank you so much @Vaishnavi!
Hi, can we use a depth camera and put the real depth value (z value) in instead of its estimate (result.pose_landmark.z)?
Hey, you mentioned you would make a multi-exercise tracker in another video; have you done that already?
Thank you for your kind sharing.
Anytime, so glad you liked it!
Thanks for the great tutorial.
Thanks for the amazing video... I'm just facing one problem: the image in the webcam appears to be a mirror image. I tried flipping the image, but the landmarks do not flip. It does not affect the arm, but the landmarks stay in the same place without flipping, so whenever I'm lifting my left arm, my right arm's landmarks are moving.
thanks man. love your video :)