AI Hand Pose Estimation with MediaPipe | Detect Left and Right Hand + Calculate Angles

  • Published 27 Jan 2025

COMMENTS • 164

  • @bankcrawpackchannel6936
    @bankcrawpackchannel6936 3 years ago +7

    I think you make the best tutorial videos on YouTube. Keep going, sir.

    • @NicholasRenotte
      @NicholasRenotte  3 years ago +1

      Thanks so much @Bank Crawpack Channel, super appreciated!

  • @grundtongrundton
    @grundtongrundton 3 years ago +10

    To everyone struggling with the left/right detection: this alternative get_label function might work for you:

    def get_label(index, hand, results):
        # cam_width and cam_height are assumed to be defined earlier in the notebook
        output = None
        if index == 0:
            label = results.multi_handedness[0].classification[0].label
            coords = tuple(np.multiply(
                np.array((hand.landmark[mp_hands.HandLandmark.WRIST].x,
                          hand.landmark[mp_hands.HandLandmark.WRIST].y)),
                [cam_width, cam_height]).astype(int))
            output = label, coords
            return output
        if index == 1:
            label = results.multi_handedness[1].classification[0].label
            coords = tuple(np.multiply(
                np.array((hand.landmark[mp_hands.HandLandmark.WRIST].x,
                          hand.landmark[mp_hands.HandLandmark.WRIST].y)),
                [cam_width, cam_height]).astype(int))
            output = label, coords
            return output

    • @BrunoSilva-rt1wz
      @BrunoSilva-rt1wz 2 years ago +1

      The index in results.multi_handedness[x].classification[0].index will always be 1 for right and 0 for left; what changes is the position of the hands inside the results.multi_handedness array, and your solution showed me that :)
      Thanks for that!

    • @BrunoSilva-rt1wz
      @BrunoSilva-rt1wz 2 years ago +2

      Here's mine:

      def get_handedness(index, results, video_width, video_height):
          """
          Params:
              index: the positional index of the hand identified in the
                  results.multi_hand_landmarks list. If two hands were detected,
                  for example, the hand in the second position of the array will
                  have index 1, and the first index 0.
              results: the output of mp.solutions.hands.Hands(...).process(image)
              video_width: the width of the video output, usually obtained from
                  cap.get(cv2.CAP_PROP_FRAME_WIDTH)
              video_height: the height of the video output, usually obtained from
                  cap.get(cv2.CAP_PROP_FRAME_HEIGHT)

          Outputs:
              (handedness, coordinates)
              - handedness: whether the hand is 'Left' or 'Right', plus the score
              - coordinates: coordinates of the wrist landmark

          Observation: results.multi_hand_landmarks is an array in which each
          element represents one hand, and each hand has an array with the 21
          hand-landmark coordinates. results.multi_handedness is similar: it is
          an array in which each element represents one hand, and each hand has
          a label with the handedness and the score of the classification.
          The relationship between the two is based on the position of the hand
          in the array; for example, results.multi_hand_landmarks[0] holds the
          landmarks of the same hand as results.multi_handedness[0].
          """
          output = None

          # Getting the wrist landmark coordinates
          wrist_landmark_normalized_coordinates = results.multi_hand_landmarks[index].landmark[mp_hands.HandLandmark.WRIST]
          wrist_landmark_coordinates = tuple(
              np.multiply(
                  [wrist_landmark_normalized_coordinates.x, wrist_landmark_normalized_coordinates.y],
                  [video_width, video_height]
              ).astype(int)
          )

          # Getting the handedness label
          label = results.multi_handedness[index].classification[0].label
          score = round(results.multi_handedness[index].classification[0].score, 2)
          output = (
              f'{label} {score}',
              wrist_landmark_coordinates
          )

          return output

    • @megawa7ed
      @megawa7ed 1 year ago +1

      Thanks legend

  • @zubinjain9956
    @zubinjain9956 3 years ago +2

    Thank you so much, you're a real life saver when it comes to learning the basics of pose detection.

  • @ojiya3863
    @ojiya3863 3 years ago +4

    Thank you so much for this amazing tutorial!
    Sir, I have a suggestion for a better angle calculation.
    Since we have 3D positions of all landmarks, I guess we can use vector products like this:

    a = np.array([hand.landmark[joint[0]].x, hand.landmark[joint[0]].y, hand.landmark[joint[0]].z])  # First coords
    b = np.array([hand.landmark[joint[1]].x, hand.landmark[joint[1]].y, hand.landmark[joint[1]].z])  # Second coords
    c = np.array([hand.landmark[joint[2]].x, hand.landmark[joint[2]].y, hand.landmark[joint[2]].z])  # Third coords

    radians = np.arctan2(np.linalg.norm(np.cross(a - b, c - b)), np.dot(a - b, c - b))
    angle = np.abs(radians * 180.0 / np.pi)
    if angle > 180.0:
        angle = 360 - angle

    cv2.putText(image, str(round(angle, 2)), tuple(np.multiply([b[0], b[1]], resolution).astype(int)),
                cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 255, 255), 2, cv2.LINE_AA)

    Thank you very much in advance.

  • @facundonieto1396
    @facundonieto1396 2 years ago

    You are probably the best person ever on YouTube.

  • @AzaB2C
    @AzaB2C 11 months ago

    Thank you! You helped me add hand gestures to play Tetris and other stuff on my wood tile pixel display. Cheers!

  • @mauipomare3232
    @mauipomare3232 2 months ago

    Where's the link? You said it's in the description at 2:11??

  • @girishkemba3865
    @girishkemba3865 3 years ago +5

    This is amazing, will watch while having food lol.
    Would love an in-depth sign language interpreter using MediaPipe.

    • @NicholasRenotte
      @NicholasRenotte  3 years ago +1

      Yes, definitely! Will probably be a mega tutorial on it once I get the RNN sorted @Girish!

    • @girishkemba3865
      @girishkemba3865 3 years ago +1

      @@NicholasRenotte Awesome as always!
      Big ups to the content on youtube.

    • @NicholasRenotte
      @NicholasRenotte  3 years ago

      @@girishkemba3865 thanks so much!

    • @gilachess
      @gilachess 3 years ago +1

      @@NicholasRenotte Wow. That was in the realm of fantasy for me. If it can be made real, it would be simply amazing!!

    • @NicholasRenotte
      @NicholasRenotte  3 years ago

      @@gilachess oohhhh we're definitely going to do it!!

  • @gustavojuantorena
    @gustavojuantorena 3 years ago +1

    Awesome tutorial Nick! 💪💪💪

  • @eranfeit
    @eranfeit 3 years ago +2

    Hello Nicholas, thank you for your effort. I did the first part of the video. Did you notice that the detection is not doing so well? If you try to detect only the right hand, the function does not work. If you raise both hands it works, but the detection is random: sometimes it detects the right as right and the left as left, and sometimes the opposite way. The good part is that it always detects both hands, but the detection is not always correct. Did you notice? Do you know how to fix it?
    Eran

    • @NicholasRenotte
      @NicholasRenotte  3 years ago +1

      I did notice that @Eran, I've got to dig into it. Either the model is not as stable as I'd like it to be or I'm doing something wrong, got it on my to do list to dig into it!

    • @eranfeit
      @eranfeit 3 years ago

      @@NicholasRenotte
      Thank you once again. I checked it myself and I believe it is not relevant to your code. This is the model behavior.

    • @N1nj0
      @N1nj0 3 years ago

      In Classification the index for left is 0 and for right is 1, but we are always looking for 0, so we only get the label when the left hand is in the camera.
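
      As a minimal sketch of that fix (my own addition, using the results object from the video): loop over results.multi_handedness with enumerate instead of hardcoding index 0, so each detected hand gets its own label.

      def get_labels(results):
          """Return (index, label, score) for every detected hand."""
          labels = []
          if results.multi_handedness:
              # idx lines up with the same hand's position in results.multi_hand_landmarks
              for idx, handedness in enumerate(results.multi_handedness):
                  classification = handedness.classification[0]
                  labels.append((idx, classification.label, classification.score))
          return labels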

  • @sakethgupta2885
    @sakethgupta2885 3 years ago +1

    Thank you so much! Hope you reach 1M subs soon.

  • @kunalshah7639
    @kunalshah7639 2 years ago

    Your videos are just so GREAT!!!!! Thank you so much for your amazing tutorials!

  • @gaurav2510parashar
    @gaurav2510parashar 3 years ago +1

    #question Don't we need to multiply the coordinates by the width and height before calculating the angle? We want the angle between the line segments as shown in the rendered image, not the normalized one.
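
    A quick numeric check of that point (a hypothetical 640x480 frame; the landmark values are made up): with a non-square frame, the angle computed from normalized coordinates differs from the one in the rendered image, so yes, scale by [width, height] first unless the frame is square.

    import numpy as np

    def angle_at(a, b, c):
        # Angle at vertex b, in degrees, using an arctan2 formula like the one in the video
        radians = np.arctan2(c[1] - b[1], c[0] - b[0]) - np.arctan2(a[1] - b[1], a[0] - b[0])
        angle = np.abs(radians * 180.0 / np.pi)
        return 360 - angle if angle > 180.0 else angle

    a, b, c = np.array([0.4, 0.5]), np.array([0.5, 0.5]), np.array([0.6, 0.4])
    scale = np.array([640, 480])  # frame width, height

    print(angle_at(a, b, c))                          # 135.0 in normalized space
    print(angle_at(a * scale, b * scale, c * scale))  # ~143.13 in pixel space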

  • @flioink
    @flioink 3 years ago +1

    Just finished it - good one.
    I did it with a video, so the functions had to take width and height arguments to account for the different video dimensions.
    Took them in the main loop with:

    width = cap.get(3)   # cv2.CAP_PROP_FRAME_WIDTH
    height = cap.get(4)  # cv2.CAP_PROP_FRAME_HEIGHT

    Then passed them into the "getLabel" function like:

    coords = tuple(np.multiply(
        np.array((hand.landmark[mpHands.HandLandmark.WRIST].x,
                  hand.landmark[mpHands.HandLandmark.WRIST].y)),
        [width, height]).astype(int))

    • @NicholasRenotte
      @NicholasRenotte  3 years ago +1

      Awesome work!!

    • @flioink
      @flioink 3 years ago +1

      @@NicholasRenotte Thanks, your tutorials rock!

  • @freenomon2466
    @freenomon2466 2 years ago +1

    Thanks for the awesome video! Wanted to ask: is there a way to get the orientation (angle) data of the hand/palm? The finger angles are great, but I can't find a way to get the hand angles (palm up or down, rotation of the wrist). Would appreciate any tips!

    • @rrplaygames2883
      @rrplaygames2883 1 year ago

      Hey, did you manage to get the orientation of the wrist? I am currently working on this and any tips/resources are much appreciated.
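
      Since this question comes up a few times in the thread, here is a rough sketch of one way to estimate palm orientation (my own assumption, not from the video): take the normal of the palm plane via a cross product of two palm vectors, using the wrist (0), index MCP (5) and pinky MCP (17) landmarks.

      import numpy as np

      def palm_normal(hand_landmarks):
          """Approximate palm orientation as the unit normal of the wrist / index-MCP / pinky-MCP plane."""
          pts = {i: np.array([hand_landmarks.landmark[i].x,
                              hand_landmarks.landmark[i].y,
                              hand_landmarks.landmark[i].z]) for i in (0, 5, 17)}
          normal = np.cross(pts[5] - pts[0], pts[17] - pts[0])
          return normal / np.linalg.norm(normal)

      # The sign of the z component then hints at palm-up vs palm-down relative to the camera;
      # whether that holds depends on handedness and on whether your frame is mirrored.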

  • @1212-t1o
    @1212-t1o 3 years ago +3

    How do you get only the left or the right hand coords?

    • @NicholasRenotte
      @NicholasRenotte  3 years ago

      You can extract them based on grabbing the coords from results.multi_hand_landmarks[INSERT_YOUR_HAND_NUM_HERE]
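
      A small sketch building on that answer (the helper itself is my own, but it only uses fields shown in the video): pair each entry of results.multi_hand_landmarks with its results.multi_handedness entry and filter by label.

      def get_hand(results, wanted_label='Right'):
          """Return the landmarks of the hand classified as wanted_label, or None."""
          if not results.multi_hand_landmarks:
              return None
          for hand, handedness in zip(results.multi_hand_landmarks, results.multi_handedness):
              if handedness.classification[0].label == wanted_label:
                  return hand
          return None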

    • @1212-t1o
      @1212-t1o 3 years ago

      @@NicholasRenotte Thank you very much!

  • @CrypticSymmetry
    @CrypticSymmetry 3 years ago +2

    Awesome video! You going for some realtime hand Mo-cap? 😋

  • @wwhysoseriouss
    @wwhysoseriouss 3 years ago +2

    First, thank you for your amazing videos.
    I have a question:
    How can I distinguish between the palm and the back of the hand?

    • @NicholasRenotte
      @NicholasRenotte  3 years ago

      Ooooh, won't work for this model. Could probably create something custom for it (would suggest instance segmentation).

    • @grundtongrundton
      @grundtongrundton 3 years ago +1

      I think if you take the label of the hand and the direction of the thumb, you can build something for that. For example, if the camera sees a left hand where the position of the thumb is further to the right than the position of the pinky, then you can conclude that you are looking at the inside of the left hand!
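
      A minimal sketch of that heuristic (assuming the label comes from results.multi_handedness, and landmark 4 is the thumb tip and 20 the pinky tip; treat it as a starting point, since a mirrored frame flips the comparison):

      def palm_facing_camera(hand_landmarks, label):
          """Heuristic: compare thumb-tip and pinky-tip x positions against the handedness label."""
          thumb_x = hand_landmarks.landmark[4].x
          pinky_x = hand_landmarks.landmark[20].x
          if label == 'Left':
              return thumb_x > pinky_x  # thumb to the right of the pinky -> palm side
          return thumb_x < pinky_x      # 'Right'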

  • @rizkydermawan408
    @rizkydermawan408 2 years ago

    Why are you such a genius, sir??

  • @BrunoJantarada
    @BrunoJantarada 3 years ago +4

    Hi Nicholas, first, thank you for all these amazing videos you are doing. We are building a prototype based on your videos so we can recognize Portuguese Sign Language in real time. At this point we have a couple of labels trained, but we are having some issues with recognizing sequences of words/labels, so we can detect multiple poses that together represent a specific sentence. Currently we are feeding the array of results from detected positions into a similarity natural-language model that comes up with the most similar phrase and then produces it through voice. We currently have a specific label that lets the system know when to send it to "voice". Any tips on how we can achieve multiple pose detections that represent a phrase? We thought about some kind of LSTM, but we still haven't found the time to test it. Along with that, we have a parallel neural network that is trying to achieve the same thing we have in computer vision, but only with hand map positions. Please let us have your feedback on this. Thank you very much in advance and kind regards.

    • @mutaherkhan2161
      @mutaherkhan2161 3 years ago +2

      I am working on Pakistani sign language.

    • @BrunoJantarada
      @BrunoJantarada 3 years ago +3

      @@mutaherkhan2161 Sweet, have you faced the same issue? How are you trying to solve it? Like, how do you get the model to recognize, let's say, 2 hand positions that represent a "sentence"? We are not yet able to recognize these sequences, but we have some ideas to deal with it. Any tips are also much appreciated. Thank you.

    • @mutaherkhan2161
      @mutaherkhan2161 3 years ago

      @@BrunoJantarada Currently I am trying to interpret the whole Urdu alphabet using fingerpose on top of handpose. I trained multiple CNN models on a PSL dataset, but the models' performance was not good in real time, so I decided to work with MediaPipe handpose, which makes my job easier. But I am still unable to interpret signs that have almost the same gesture.

    • @NicholasRenotte
      @NicholasRenotte  3 years ago +1

      Hmmm, let me double check my understanding of the problem statement: you're having trouble parsing multiple sets of poses and sending them to voice? Could you try sending all results to your voice API in real time? Or possibly wait for a pause in pose detection (when the accuracy threshold drops low), then send the call to the TTS API.

    • @BrunoJantarada
      @BrunoJantarada 3 years ago +1

      @@NicholasRenotte Thank you for your input. Currently we can detect the hand pose associated with, let's say, the word "Hello". We defined an EOS (End of Sentence) pose that allows us to send that to voice in real time. That works well. Now let's say you want to do "Good day": that's a combination of multiple poses, for instance one pose corresponding to "Good" and another to "day". At this point the model can only recognize the two poses separately. Basically the issue is detecting sentences, since we can detect single poses without problems. In parallel we are also trying to achieve this result using only MediaPipe hand coordinates instead of labeled images; in this case we are trying a simple LSTM neural network. I'll try to post some videos during the week on my LinkedIn, and I can send you the entire project if you want. Let me have your feedback. Thank you once again :)

  • @yuriemond7340
    @yuriemond7340 3 years ago +1

    Would you use the pose model if you want to do the same analysis but for the legs or the arms?

  • @JohnVeraLuzuriaga
    @JohnVeraLuzuriaga 3 years ago +1

    Great videos, Nicholas. Can you trigger actions when you close the hand? For example, turn on a light.

  • @anjanikumar8145
    @anjanikumar8145 3 years ago +1

    Please make a video on YOLOv5 using TensorBoard.

  • @omarhammami96oh
    @omarhammami96oh 2 years ago

    Good tutorial, thank you.
    I am wondering if this angle is right given that we didn't use any z coordinates; I believe the rotation matrix in 3D is more complex than this. Maybe here we calculated the angle of the projected hand only...

  • @shakhzodbekyuldoshov6610
    @shakhzodbekyuldoshov6610 3 years ago +2

    #question Thank you very much for your video tutorial. The code cannot detect the right-hand pose without the left hand showing. For example, if I do not show my left hand but show my right hand, it cannot detect the coords of my right hand. But when I show my left hand, then the right-hand pose is detected. Why is this happening?

    • @NicholasRenotte
      @NicholasRenotte  3 years ago

      Hmm weird, let me test out on my machine and see if I'm getting the same.

    • @shakhzodbekyuldoshov6610
      @shakhzodbekyuldoshov6610 3 years ago

      @@NicholasRenotte ok 👌

    • @NicholasRenotte
      @NicholasRenotte  3 years ago +1

      @@shakhzodbekyuldoshov6610 hmmm, weird I'm getting it too. Might be because of the indexing. Will dig into it a little more.

    • @ishandeshpande1455
      @ishandeshpande1455 3 years ago +2

      @@NicholasRenotte I think for that we will need to write some extra code: if the frame detects only one hand, it should check which hand it is using multi_handedness, and if the frame detects 2 hands, we run the usual code. Great tutorial btw!!

    • @subhamsarangi9783
      @subhamsarangi9783 3 years ago +1

      @@ishandeshpande1455 can you please enlighten us on this?
      Thanks

  • @timtensor6994
    @timtensor6994 3 years ago +1

    Very nice. I have seen some snapshots where people interact with web objects based on hand movements; I wonder how it is done, though. There must be some kind of library.

  • @kelvin4845
    @kelvin4845 2 years ago

    Where should I add the rounding code so that only whole numbers are displayed when the joint angles are shown on the live feed?

  • @marthalasamarasekharreddy4638
    @marthalasamarasekharreddy4638 3 years ago +1

    With this, could we have a touchless keyboard by applying logic where one movement indicates one key on the keyboard?

    • @NicholasRenotte
      @NicholasRenotte  3 years ago +1

      Yup, I believe you could, there's actually a demo of that on the MediaPipe documentation @Samara!

  • @Hade-Death
    @Hade-Death 1 year ago

    Yo, when printing the coordinates of the wrist I am getting this error:

    print(results.multi_hand_landmarks[0].landmark[mpHands.HandLandmark.WRIST])
    TypeError: 'NoneType' object is not subscriptable

    Please help.

  • @janyiren8463
    @janyiren8463 2 years ago

    What is Z here? Do you use a depth camera?

  • @elifhanci7483
    @elifhanci7483 2 years ago

    Hey, I want to ask a question about the confidence score for each joint. MediaPipe doesn't provide these scores, but is there a way to get a confidence score for each joint of a hand??

  • @imanefahim8557
    @imanefahim8557 3 years ago +1

    How can I extract all keypoints at once, as done in the Action Recognition tutorial with the MediaPipe Holistic model?

    • @NicholasRenotte
      @NicholasRenotte  3 years ago

      Same process except just do it for the left_hand and right_hand keypoints!
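
      A small sketch of that flattening step (patterned on the Holistic tutorial mentioned above; the zero-padding for a missing hand is my assumption):

      import numpy as np

      def extract_hand_keypoints(results):
          """Flatten left- and right-hand landmarks into one vector, zero-padded when a hand is missing."""
          lh = np.array([[lm.x, lm.y, lm.z] for lm in results.left_hand_landmarks.landmark]).flatten() \
              if results.left_hand_landmarks else np.zeros(21 * 3)
          rh = np.array([[lm.x, lm.y, lm.z] for lm in results.right_hand_landmarks.landmark]).flatten() \
              if results.right_hand_landmarks else np.zeros(21 * 3)
          return np.concatenate([lh, rh])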

  • @user-vp9yt3yq5l
    @user-vp9yt3yq5l 3 years ago

    Great video! #question I have managed to get the code working, but what would I need to include to actually print the angles instead of them only appearing on the hand pose? For example, I need to be able to see all the values in a list.

  • @ramyadevinataraj315
    @ramyadevinataraj315 3 years ago +1

    Hi sir, I have a doubt about how we can achieve recognition of a hand moving from left to right, like sliding (if we are showing our hand and moving or sliding it to the left, it should recognize that the hand is moving left; if the hand is moving towards the right, it should recognize right).

    • @NicholasRenotte
      @NicholasRenotte  3 years ago +1

      Like directional tracking?

    • @ramyadevinataraj315
      @ramyadevinataraj315 3 years ago

      @@NicholasRenotte Yes sir, the model should just recognize the direction the hand is moving: if your hand is moving towards the left then the slide is left, and if it's moving towards the right then the slide is right.

    • @NicholasRenotte
      @NicholasRenotte  3 years ago +1

      @@ramyadevinataraj315 ah got it, you could implement tracking and calculate coordinate change for the detected object!
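
      A minimal sketch of that idea (entirely my own illustration): track the wrist's normalized x coordinate across recent frames and report a swipe once the change passes a threshold.

      from collections import deque

      class SwipeDetector:
          def __init__(self, window=10, threshold=0.15):
              self.xs = deque(maxlen=window)  # recent normalized wrist x positions
              self.threshold = threshold      # minimum x change that counts as a swipe

          def update(self, wrist_x):
              self.xs.append(wrist_x)
              if len(self.xs) == self.xs.maxlen:
                  delta = self.xs[-1] - self.xs[0]
                  if delta > self.threshold:
                      return 'right'
                  if delta < -self.threshold:
                      return 'left'
              return None

      # Inside the capture loop: direction = detector.update(hand.landmark[0].x)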

    • @ramyadevinataraj315
      @ramyadevinataraj315 3 years ago

      @@NicholasRenotte Thank you

    • @NicholasRenotte
      @NicholasRenotte  3 years ago

      @@ramyadevinataraj315 anytime!

  • @wgalloPT
    @wgalloPT 2 years ago

    Yes, this is amazing, with one small issue I would like your help with. As an example I'll cite the 8, 7, 6 angle. It shows 179 degrees, almost 180, which makes sense geometrically but not anatomically (physiologically): in anatomy, that angle would be measured as 0 degrees, and when you flex (bend) your finger it grows from 0 to 5, 10, etc. degrees. So if that is the case, what do I need to change in the equation? You have: if angle > 180.0: angle = 360 - angle. What changes would I have to make??? Thank you, very good teacher!!!
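
    A quick sketch of one answer (my suggestion, not from the video): keep the existing calculation and report the supplement, so a straight finger reads 0 degrees and flexion grows towards 180.

    def to_flexion(angle):
        """Convert the geometric joint angle (180 = straight) to anatomical flexion (0 = straight)."""
        return 180.0 - angle

    # e.g. a geometric angle of 179 degrees becomes a flexion of 1 degree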

  • @rrplaygames2883
    @rrplaygames2883 1 year ago

    Hey Nick, great video! I want to use this detection and overlay 3D models of rings and bracelets for virtual try-on. Can you please point me to some resources that would be useful for overlaying 3D objects in Python? I am currently able to detect the hand as desired but not sure how to proceed from here. Could PyOpenGL or some other library be useful for this? Please let me know. Thanks.

  • @yousseffarhan8901
    @yousseffarhan8901 3 years ago +1

    Hello Nicholas, I hope you are doing well. I have a quick question regarding the GPU: how do I verify that my computer has a GPU? Because I did all the steps you did (install CUDA, cuDNN ...) but it is still running on the CPU! Thank you so much.

    • @NicholasRenotte
      @NicholasRenotte  3 years ago +1

      Heya @Youssef, open up your task manager, then select the Performance tab. Towards the bottom you should have a monitor for your GPU if there is one installed.

    • @yousseffarhan8901
      @yousseffarhan8901 3 years ago +1

      @@NicholasRenotte thank you very much 🙏🏼 its so helpful

    • @NicholasRenotte
      @NicholasRenotte  3 years ago +1

      @@yousseffarhan8901 anytime! Glad you enjoyed it!

  • @bellemarravelo6002
    @bellemarravelo6002 3 years ago +1

    Hi sir, do you have a tutorial about real-time object detection that covers the train and test data split and averages the accuracy?

    • @NicholasRenotte
      @NicholasRenotte  3 years ago

      Heya @Bellemar Ravelo, I think we do evaluation in the 5hr tutorial. Did you check that out?

  • @ImpulseFusion
    @ImpulseFusion 3 years ago +1

    Thank you for the great tutorial.
    Sir, how can we use MediaPipe to recognize sign language?

  • @johnkang6088
    @johnkang6088 3 years ago +1

    Is there any way you can create an action detector with tfjs?

    • @NicholasRenotte
      @NicholasRenotte  3 years ago +1

      Ya, would likely need to implement an RNN layer or have windowed data!

  • @gestualy
    @gestualy 1 year ago

    Fantastic!!!! Gestualy power

  • @haditamimi2891
    @haditamimi2891 3 years ago +1

    Is it possible to find the z coordinates and calculate angles using x, y, z?

    • @NicholasRenotte
      @NicholasRenotte  3 years ago

      Sure can!

    • @haditamimi2891
      @haditamimi2891 3 years ago

      @@NicholasRenotte Thanks, I am trying to find Z but it keeps giving me 0. Can you please give me an example code?

    • @haditamimi2891
      @haditamimi2891 3 years ago

      @@NicholasRenotte In the end I want to find angles in 3D (using x, y, z), but I don't think I have correct results because working with z is challenging. Is there any code to help me?

    • @NeP516
      @NeP516 3 years ago

      @@haditamimi2891 Remember that MediaPipe's Z axis is centered at the wrist. The wrist always has a 0 coordinate in Z!

  • @hsnrsd3468
    @hsnrsd3468 2 years ago

    you are awesome, dude. Thank you so much.

  • @AdiMehaindroo
    @AdiMehaindroo 3 years ago +1

    Bro, can you please tell me: if I want these landmarks displayed on an np.zeros matrix, how can I do that?

    • @NicholasRenotte
      @NicholasRenotte  3 years ago

      Heya @Adi, could probably create a blank numpy array, then use the visualisation library to draw the landmarks and convert to grayscale! Lmk if you need a deeper dive!
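
      A minimal sketch of that (the 480x640 canvas size is an assumption; mp_drawing is mp.solutions.drawing_utils as in the video):

      import cv2
      import mediapipe as mp
      import numpy as np

      mp_hands = mp.solutions.hands
      mp_drawing = mp.solutions.drawing_utils

      # Blank black canvas the same size as the camera frame,
      # assuming `results` came from an earlier hands.process(image) call
      canvas = np.zeros((480, 640, 3), dtype=np.uint8)
      if results.multi_hand_landmarks:
          for hand in results.multi_hand_landmarks:
              mp_drawing.draw_landmarks(canvas, hand, mp_hands.HAND_CONNECTIONS)

      gray = cv2.cvtColor(canvas, cv2.COLOR_BGR2GRAY)  # optional grayscale step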

  • @priyanshugarg6175
    @priyanshugarg6175 1 year ago

    Hey Nick, great video. Would we be able to extract a palmprint using MediaPipe? If possible, would you be kind enough to make a video on it?

  • @meethansaliya4885
    @meethansaliya4885 3 years ago +1

    Hey, in the get_label function, in the line "for idx, classification in enumerate(results.multi_handedness):", where did you actually use "idx" in the code? I wrote the same code, but it cannot classify both hands at the same time. Any help regarding this will be appreciated. Thanks :)

    • @NicholasRenotte
      @NicholasRenotte  3 years ago

      Are multiple hands detected?

    • @meethansaliya4885
      @meethansaliya4885 3 years ago

      @@NicholasRenotte Yes, they're detected, but they don't get labeled at the same time.

    • @NicholasRenotte
      @NicholasRenotte  3 years ago

      @@meethansaliya4885 hmmm not too sure unfortunately without looking into it in more detail.

  • @elcorreodesteven
    @elcorreodesteven 2 years ago

    Thank you very much, friend, your video is great.
    I have a question: how can I see the position angle of the wrist?

    • @rrplaygames2883
      @rrplaygames2883 1 year ago

      Hi, did you manage to get the orientation of the wrist? I am currently working on it and any tips/resources are much appreciated.

    • @elcorreodesteven
      @elcorreodesteven 1 year ago

      @@rrplaygames2883
      Sure, friend, I'll show you what I did. KEEP IN MIND THAT IT'S NOT VERY EXACT, but it worked for what I needed (I just wanted to move a servomotor to the side my wrist moves).

      def wrist_angle(image):
          # I fixed the reference point where the camera is located: the upper middle point of the screen
          a = np.array([1000, 0])
          # I take nodes 5 (INDEX_FINGER_MCP) and 9 (MIDDLE_FINGER_MCP)
          b = np.array([hand.landmark[9].x, hand.landmark[9].z])  # second coordinate
          c = np.array([hand.landmark[5].x, hand.landmark[5].z])  # third coordinate
          # radian calculation
          y1 = c[1] - b[1]
          y2 = a[1] - b[1]
          x1 = c[0] - b[0]
          x2 = a[0] - b[0]
          radians = round((np.arctan2(y1, x1) - np.arctan2(y2, x2)) * 2, 1)
          if radians < 0:
              radians = 0
              wrist_ang = 0
          else:
              # convert to degrees
              wrist_ang = np.abs(radians * 180.0 / np.pi)
              # the angle should not be greater than 180, so clamp it
              if wrist_ang > 180.0:
                  wrist_ang = 360.0 - wrist_ang
          coords = tuple(
              np.array((hand.landmark[mp_hands.HandLandmark.WRIST].x * 640 + 20,
                        hand.landmark[mp_hands.HandLandmark.WRIST].y * 480 + 20)).astype(int))
          cv2.putText(image, str(round(wrist_ang, 0)),
                      coords,
                      cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 255, 255), 1, cv2.LINE_AA)
          return image

    • @rrplaygames2883
      @rrplaygames2883 1 year ago +1

      @@elcorreodesteven Thanks a lot! This is super helpful!

  • @julianlai1953
    @julianlai1953 3 years ago

    How do you know at which second the landmarks are being captured? Is there a way to tie the landmarks to their capture time?

    • @NicholasRenotte
      @NicholasRenotte  3 years ago

      Ah, yeah, I would output based on relative time. So save the video and just calculate elapsed time then output to a CSV. Probs too long to explain in a comment, want a video on it?
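
      A tiny sketch of that idea (all names here are my own): record the elapsed time next to the wrist coordinates for each processed frame.

      import csv
      import time

      start = time.time()
      with open('landmarks.csv', 'w', newline='') as f:
          writer = csv.writer(f)
          writer.writerow(['elapsed_s', 'wrist_x', 'wrist_y'])
          # ... inside the capture loop, after results = hands.process(image):
          if results.multi_hand_landmarks:
              wrist = results.multi_hand_landmarks[0].landmark[0]
              writer.writerow([round(time.time() - start, 3), wrist.x, wrist.y])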

  • @hsnrsd3468
    @hsnrsd3468 2 years ago

    Please create multiple angles for the Pose landmarks. Thanks in advance.

  • @ameerazam3269
    @ameerazam3269 3 years ago +1

    again best ever

  • @wgalloPT
    @wgalloPT 2 years ago

    RENOTTE, please kill my doubt!!!! How can we make it show 0 degrees instead of 180 when the fingers are straight, and then grow towards 180 when they bend?????

  • @isidorastevanovic8029
    @isidorastevanovic8029 1 year ago

    Thank you for an amazing tutorial! If I were to calculate both the pitch and the yaw angles from the coordinates, do you maybe know how I could do that? Thank you in advance for helping! :)

    • @rrplaygames2883
      @rrplaygames2883 1 year ago

      Hi, were you able to figure out how to calculate the roll, pitch and yaw angles?

  • @luvpodcast9763
    @luvpodcast9763 3 years ago +1

    I've learned HTML, CSS and JavaScript. How do I get started on a career in AI? Please guide me, sir.

    • @NicholasRenotte
      @NicholasRenotte  3 years ago

      Check this out: ua-cam.com/video/oLpBGtY-_sI/v-deo.html

  • @rupendrakrishnaraavi4217
    @rupendrakrishnaraavi4217 3 years ago

    Hi Nicholas, I tried to build a custom model using the MediaPipe hand solution, but when I try to export the landmarks the CSV file looks blank.
    This is what I tried to give as input:

    pose = results.multi_hand_landmarks[0].landmark

    as well as:

    pose = results.multi_hand_landmarks

    When I remove the try/except, the error that appears is:

    'NormalizedLandmark' object is not subscriptable

    How do I get the coordinates exported? Help me with this; I've been trying for hours and can't get it to work.

    • @rupendrakrishnaraavi4217
      @rupendrakrishnaraavi4217 3 years ago

      Hi Nicholas, please help me with this

    • @NicholasRenotte
      @NicholasRenotte  3 years ago

      Heya @Rupendra, this is likely because there are no hands detected in the frame. Unfortunately MediaPipe doesn't handle this gracefully, so you need to check whether hands are in the frame before attempting to subscript them.
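
      A minimal sketch of that guard (assuming the results / mp_hands names from the video):

      results = hands.process(image)
      # Only index into the landmarks once we know at least one hand was detected
      if results.multi_hand_landmarks:
          wrist = results.multi_hand_landmarks[0].landmark[mp_hands.HandLandmark.WRIST]
          print(wrist.x, wrist.y, wrist.z)
      else:
          print('No hands in frame')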

    • @rupendrakrishnaraavi4217
      @rupendrakrishnaraavi4217 3 years ago

      @@NicholasRenotte Hi, the above issue is solved, but now while I fit the model it tells me the values are close to NaN.

    • @imanefahim8557
      @imanefahim8557 3 years ago

      @@rupendrakrishnaraavi4217 Hey, did you manage to export the landmarks?

  • @al_swhoolname5241
    @al_swhoolname5241 3 years ago

    Can you make this program with the Unity game engine? We need a video on connecting this with Unity. Please.

  • @mohammadhaqqi
    @mohammadhaqqi 3 years ago +1

    Please do MediaPipe detection on the GPU.

    • @NicholasRenotte
      @NicholasRenotte  3 years ago +1

      Yah, check this out: google.github.io/mediapipe/getting_started/gpu_support.html

    • @mohammadhaqqi
      @mohammadhaqqi 3 years ago

      @@NicholasRenotte I tried but was not successful. Please post a video about it. With thanks and greetings.

  • @philtoa334
    @philtoa334 3 years ago +1

    yes.

  • @brucewayne9708
    @brucewayne9708 3 years ago +2

    Dude, please do sign language detection with MediaPipe. Please 🥺🥺

    • @ahmedhabeeb3166
      @ahmedhabeeb3166 3 years ago

      +1

    • @NicholasRenotte
      @NicholasRenotte  3 years ago +1

      Yah! Got it planned!

    • @brucewayne9708
      @brucewayne9708 3 years ago +1

      @@NicholasRenotte thank you very much. You are the best.

    • @NicholasRenotte
      @NicholasRenotte  3 years ago

      @@brucewayne9708 anytime! Anything for batman ;)

    • @brucewayne9708
      @brucewayne9708 3 years ago

      @@NicholasRenotte haha. This may be helpful nsiddharthasharma.medium.com/alphabet-hand-gestures-recognition-using-media-pipe-4b6861620963

  • @artemklyuev5822
    @artemklyuev5822 1 year ago

    34:57 is a jump scare. I was wearing headphones 🤪

  • @joshua_dlima
    @joshua_dlima 10 months ago

    thanks a tonne

  • @ashutoshbabras3715
    @ashutoshbabras3715 1 year ago

    It's hard to understand what you did and why when you use a Jupyter notebook.
    Please either don't copy your code multiple times or use a proper IDE like PyCharm or Spyder. It would really help us follow along more easily.

  • @hcos8139
    @hcos8139 2 years ago

    Not really beginner-friendly.